
Re: User-facing aspects of serializable transactions

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Greg Stark" <stark(at)enterprisedb(dot)com>
Cc: "Heikki Linnakangas" <heikki(dot)linnakangas(at)enterprisedb(dot)com>, "Jeff Davis" <pgsql(at)j-davis(dot)com>, "<pgsql-hackers(at)postgresql(dot)org>" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: User-facing aspects of serializable transactions
Date: 2009-05-28 15:33:30
Lists: pgsql-hackers
Greg Stark <stark(at)enterprisedb(dot)com> wrote:
> Once again, the type of scan is not relevant. It's quite possible to
> have a table scan and only read some of the records, or to have an
> index scan and read all the records.
> You need to store some representation of the qualifiers on the scan,
> regardless of whether they're index conditions or filters applied
> afterwards. Then check that condition on any inserted tuple to see
> if it conflicts.
> I think there's some room for some flexibility on the "not
> absolutely necessary" but I would want any serialization failure to
> be justifiable by simple inspection of the two transactions. That
> is, I would want only queries where a user could see why the
> database could not prove the two transactions were serializable even
> if she knows they don't. Any case where the conditions are obviously
> mutually exclusive should not generate spurious conflicts.
> Offhand the problem cases seem to be conditions like "WHERE
> func(column)" where func() is not immutable (I don't think STABLE is
> enough here). I would be ok with discarding conditions like this --
> if they're the only conditions on the query that would effectively
> make it a table lock like you're describing. But one we could
> justify to the user -- any potential insert might cause a
> serialization failure depending on the unknown semantics of func().
Can you cite anywhere that such techniques have been successfully used
in a production environment, or are you suggesting that we break new
ground here?  (The techniques I've been assuming are pretty well-worn
and widely used.)  I've got nothing against a novel implementation,
but I do think that it might be better to do that as an enhancement,
after we have the thing working using simpler techniques.

One other note -- I've never used Oracle, but years back I was told by
a fairly credible programmer who had, that when running a serializable
SELECT statement you could get a serialization failure even if it was
the only user query running on the system.  Apparently (at least at
that time) background maintenance operations could deadlock with a
SELECT.  Basically, I feel that the reason for using serializable
transactions is that you don't know what concurrent uses may happen in
advance or how they may conflict, and you should always be prepared to
handle serialization failures.
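Being "prepared to handle serialization failures" in practice means wrapping each serializable transaction in a retry loop keyed on the database's serialization-failure error (SQLSTATE 40001 in PostgreSQL). A minimal sketch, with a stand-in exception class instead of a real driver error:

```python
class SerializationFailure(Exception):
    """Stands in for the driver error carrying SQLSTATE 40001."""

def run_with_retry(txn, max_attempts=5):
    """Run a transaction function, retrying on serialization failure.
    Illustrative only; a real application would use its driver's
    error class and likely back off between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise  # give up after too many conflicts

# Simulate a transaction that conflicts twice before succeeding.
attempts = {"n": 0}
def sample_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure
    return "committed"

print(run_with_retry(sample_txn))  # committed
```

The point of the loop is that the application never needs to know which concurrent transaction caused the conflict; it only needs to re-run its own work.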

