
Re: User-facing aspects of serializable transactions

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Greg Stark" <greg(dot)stark(at)enterprisedb(dot)com>, "Jeff Davis" <pgsql(at)j-davis(dot)com>, <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: User-facing aspects of serializable transactions
Date: 2009-05-28 22:49:19
Message-ID:
Lists: pgsql-hackers
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> The fundamental problem with all the "practical" approaches I've
> heard of is that they only work for a subset of possible predicates
> (possible WHERE clauses).  The idea that you get true
> serializability only if your queries are phrased just so is ...
> icky.  So icky that it doesn't sound like an improvement over what
> we have.
I've never seen or heard of a production system which only gives you
serializable guarantees for some WHERE clauses.  What I have always
seen is multi-granularity locking, where locks are based on indexes or
rows accessed -- essentially letting the DBMS figure out what rows the
predicate covers by seeing what it examines.  If too many locks accrue
at a fine granularity, they are replaced with a single lock at a coarser
granularity.
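As a loose illustration of that escalation idea, here is a minimal sketch. This is not PostgreSQL code; the class name, structure, and threshold are invented for the example, and real lock managers track lock modes, owners, and intermediate (page-level) granularities as well:

```python
# Hypothetical sketch of multi-granularity lock escalation.
# ESCALATION_THRESHOLD and LockManager are illustrative names only.

ESCALATION_THRESHOLD = 3  # escalate once this many row locks accrue per table

class LockManager:
    def __init__(self):
        self.row_locks = {}       # table -> set of locked row ids
        self.table_locks = set()  # tables locked at the coarser granularity

    def lock_row(self, table, row_id):
        if table in self.table_locks:
            return  # already covered by a coarser-granularity lock
        rows = self.row_locks.setdefault(table, set())
        rows.add(row_id)
        if len(rows) >= ESCALATION_THRESHOLD:
            # too many fine-grained locks: replace them all with one
            # coarse lock on the whole table
            self.table_locks.add(table)
            del self.row_locks[table]

    def covers(self, table, row_id):
        return (table in self.table_locks
                or row_id in self.row_locks.get(table, set()))
```

The point of the sketch is the trade-off discussed above: after escalation, the coarse lock "covers" rows the transaction never examined, which is exactly the loss of surgical precision that can cause extra blocking or serialization failures.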
There have been papers published on the technique for decades, and it
has been used in popular databases for almost as long.  The only
objection, outside of aesthetic ones, raised so far is that we don't
know of anyone using this approach with some of the innovative index
techniques available in PostgreSQL.  I don't believe that means that
it can't be done.
Well, OK -- there is another objection -- that using this technique
creates locking on a less-than-surgically-precise set of data, leading
to blocking and/or serialization failures which would not happen with
a theoretically ideal implementation of predicate locks.  The problem
is that the cost of a "perfect" predicate locking system is much
higher than the cost of letting some transaction block or roll back
for retry.
If someone has an approach to predicate locking which retains
precision in lock scope without excessive cost, I'm more than willing
to use it.  Frankly, the fact that someone came up with a way to *use*
predicate locks to implement serializable transactions on top of MVCC,
without blocking beyond what's already there to support snapshot
isolation, has me believing that there could be more surprises around
the corner.
I do think that it might be best to get an initial implementation
using "conventional" locking, and *then* consider the fancier stuff. 
That would allow an approach which has surgical precision for a subset
of WHERE clauses to be used where it can be, with the fall-back being
broader (but not ridiculous) conventional locks where the technique
can't be used, rather than falling back to failure of serializable
transactions.
