Re: Revisited: Transactions, insert unique.

From: Lincoln Yeoh <lylyeoh(at)mecomb(dot)com>
To: Ed Loehr <eloehr(at)austin(dot)rr(dot)com>, Joachim Achtzehnter <joachim(at)kraut(dot)bc(dot)ca>
Cc: pgsql-general(at)postgresql(dot)org, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Revisited: Transactions, insert unique.
Date: 2000-04-27 03:08:51
Message-ID: 3.0.5.32.20000427110851.00a9fdd0@pop.mecomb.po.my
Lists: pgsql-general

At 11:59 AM 26-04-2000 -0500, Ed Loehr wrote:
>Joachim Achtzehnter wrote:
>>
>> In a message to Ed Loehr and pgsql-general, Lincoln Yeoh wrote:
>> > Then you do a commit on both, and you end up with two rows.
>>
>> This is disappointing indeed! What this means is that Postgresql
>> transactions are, in fact, not truly serializable. The concurrent

Well, it's not that disappointing, as it's not easily avoidable given that
transactions usually consist of multiple consecutive, non-zero-time,
order-sensitive statements. :)

So the database has to figure out how much information to share amongst
partially completed transactions in order to achieve maximum
serialisation/atomicity.

There will also be transactions which, to be serialised, must actually occur
serially: one whole transaction (or at least a group of statements within
it) has to complete before the other one (or every other one) can proceed.

Statements like SELECT ... FOR UPDATE give the database engine some hints,
but in the case I mentioned they are not enough (unless modified).
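
To make the failure concrete, here's a rough sketch of that case (the
table and column names are made up): FOR UPDATE only locks rows that
already exist, so when the row isn't there yet, nothing blocks.

  -- Sessions A and B both run:
  BEGIN;
  SELECT * FROM product WHERE name = 'widget' FOR UPDATE;
  -- no 'widget' row exists yet, so neither session blocks and
  -- both see zero rows...
  INSERT INTO product (name) VALUES ('widget');
  COMMIT;
  -- ...and both commits succeed: two 'widget' rows, unless a
  -- unique index rejects one of them.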

Basically, the database needs a need-to-know summary of the transaction in
a single atomic statement; then it can better decide what to block. The
application is in the best position to give this summary.

So in the end, for maximum serialisation, it may be a good idea to have an
arbitrary lock mechanism to cover the troublesome cases.

e.g.

  getlock('arbitrary string'[, timeout]);

  getlock('all temporary product tables, created or about to be');
  getlock('all rows, potential or real, on all tables where key1=X or key2=Y');

If another transaction tries to acquire the same lock, it waits for up to
the timeout and then aborts with an error if the lock is still held.
(Possible timeout values: -1 = wait forever, 0 or greater = time to wait
in seconds.)
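
For instance, the insert-unique case from the start of the thread might
look like this (getlock() here is the hypothetical primitive above, not
an existing PostgreSQL function; I'm assuming the lock is released at
the end of the transaction):

  BEGIN;
  getlock('product name=widget', 10);
  -- from here on, no other transaction asking for this string
  -- can interleave with us
  SELECT * FROM product WHERE name = 'widget';
  -- insert only if the select returned no rows
  INSERT INTO product (name) VALUES ('widget');
  COMMIT;  -- lock released here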

OK, so I'm biased towards how MySQL does it (it's simple and has a good
chance of working well). Yes, it shifts a lot to the application. But if
people have to do things like issue their multiple SELECT ... FOR UPDATEs
in the right order (to prevent deadlocks), they might as well start using
something like this instead (or fix their architecture if possible ;) ).
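
For reference, MySQL's version looks roughly like this (same made-up
table as above; GET_LOCK() returns 1 when the lock is acquired, 0 on
timeout, and NULL on error, and the lock belongs to the connection
rather than to a transaction):

  SELECT GET_LOCK('product name=widget', 10);
  -- check it returned 1 before proceeding
  SELECT * FROM product WHERE name = 'widget';
  INSERT INTO product (name) VALUES ('widget');
  SELECT RELEASE_LOCK('product name=widget');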

And it's likely to be faster! Does anyone else here like this arbitrary
lock thingy?

I'm very interested in other ways to achieve good serialisation,
especially database-centric methods.

Cheerio,

Link.

p.s. Would anyone actually need timeouts of a day (86400 seconds) or greater?
