On Sep 22, 1999 at 11:13, Tom Lane wrote about Re: [SQL] [GENERAL]
> Stuart Rison <rison(at)biochemistry(dot)ucl(dot)ac(dot)uk> writes:
> > Now I've sussed this out as the update must be dealing with rows where b=4
> > first, trying to UPDATE them to b=3 and failing because these are already
> > 'occupied'.
> > Is this a bug? is this a feature? is there a way of telling the UPDATE
> > what order to UPDATE in?
> It's a bug, but I doubt it's likely to get fixed any time soon,
> since unique indexes would get a LOT slower if they did this
> "right". As you say, the update must be hitting some of the b=4
> rows first. At that point, the system has no idea that the existing
> b=3 rows are going to be invalidated in the same transaction, so its
> uniqueness check fails. To do this right, the uniqueness check
> would have to be applied at the end of the transaction, which would
> mean remembering every tuple that's been inserted/updated in the
> current transaction and going back to check its uniqueness.
> There is a running discussion on the hackers list about implementing
> referential integrity constraints, which have this same problem that
> you can't really apply (some kinds of) checks until you see the
> entire transaction results. It'll get done but it's not simple.
> I'm not sure whether anyone is thinking of applying that mechanism
> to unique-index checks...
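For concreteness, the failing scenario from the original post can be reconstructed roughly like this (the table name is assumed; only the column b and the values 3 and 4 come from the thread):

```sql
-- Assumed reconstruction of the scenario discussed above.
CREATE TABLE t (b integer UNIQUE);
INSERT INTO t VALUES (3), (4);

-- Shift every value down by one.  Logically the end result (2, 3) is
-- unique, but if the executor happens to visit the b=4 row first, it
-- tries to write b=3 while the old b=3 row still exists, and the
-- per-tuple uniqueness check fails.
UPDATE t SET b = b - 1;
```

Which rows fail depends entirely on the physical order in which the update visits them, which is what makes the behaviour look erratic.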
Well, my silly mind (not being very into the actual implementation of
PostgreSQL) suggests not a solution but a temporary fix: proceed as you
normally do as long as there are no conflicts (i.e. check before each
tuple), but when you reach a conflict, simply remember the conflicting
tuple until the end of the transaction, and then recheck for conflicts
once the transaction has ended, but only on the remembered (and thus
previously conflicting) tuples.
If, at that time, the conflicts have been removed, the transaction is
okay; otherwise it generates an error (and does a rollback). This
implies that the complete transaction has to be able to be rolled back
anyway (alterations to previously "good" tuples must not survive, as
the complete transaction is not okay), so perhaps this is not a good
idea after all.
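For what it's worth, the scheme sketched above (tolerate the conflict, remember it, recheck at commit) is essentially what SQL's deferrable constraints later provided. In modern PostgreSQL releases (9.0 onward) the unique check can be postponed to commit time explicitly; a minimal sketch, using the same assumed table as before:

```sql
-- A deferrable unique constraint postpones the check to COMMIT,
-- so transient duplicates inside the transaction are tolerated.
CREATE TABLE t (b integer UNIQUE DEFERRABLE INITIALLY DEFERRED);
INSERT INTO t VALUES (3), (4);

BEGIN;
UPDATE t SET b = b - 1;  -- a momentary duplicate b=3 is allowed here
COMMIT;                  -- the uniqueness recheck happens here and passes
```

This was not available in 1999, but it confirms that the "recheck the remembered conflicts at transaction end" idea is workable in principle.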
But on the other hand, your description (hitting some of the b=4 rows
first) suggests that checks are done before altering each tuple (on
that specific tuple), and not on the entire transaction before
committing any alterations. This means that if anything goes wrong, the
previously altered tuples will have to be rolled back, and thus it
seems that there is already a knowledge of which tuples were altered
(and …)
I hope I made myself clear, and most likely my input is silly, but as
stated above I don't know a lot about the implementation of PostgreSQL.