Re: determine snapshot after obtaining locks for first statement

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Greg Stark" <gsstark(at)mit(dot)edu>
Cc: "Markus Wanner" <markus(at)bluegap(dot)ch>, "Robert Haas" <robertmhaas(at)gmail(dot)com>, <pgsql-hackers(at)postgresql(dot)org>,"Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: determine snapshot after obtaining locks for first statement
Date: 2009-12-17 16:59:48
Message-ID: 4B2A0F24020000250002D6EB@gw.wicourts.gov
Lists: pgsql-hackers

Greg Stark <gsstark(at)mit(dot)edu> wrote:
> So for multi-statement transactions I don't see what this buys
> you.

Well, I became interested when Dr. Cahill said that adding this
optimization yielded dramatic improvements in his high-contention
benchmarks. Clearly it won't help every load pattern.

> You'll still have to write the code to retry, and postgres
> retrying in the cases where it can isn't really going to be a
> whole lot better.

In my view, any use of a relational database carries the possibility
of a serialization error. In other database products I've run into
situations where a simple SELECT at READ COMMITTED can result in a
serialization failure, so all application software should use a
framework capable of recognizing and automatically recovering from
these errors. I just try to keep them to a manageable level.
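
To be concrete, the sort of framework I have in mind is little more
than a loop that reruns the whole transaction whenever the server
reports SQLSTATE 40001 (or a deadlock, 40P01). A rough sketch in
Python with psycopg2 follows; the connection string, table, and
transaction body are placeholders of my own, not anything from this
thread:

import psycopg2
from psycopg2 import errorcodes

# Retry the whole transaction on serialization failure (40001) or
# deadlock (40P01); anything else is passed up to the caller.
RETRYABLE = {errorcodes.SERIALIZATION_FAILURE,
             errorcodes.DEADLOCK_DETECTED}

def run_with_retry(conn, txn_body, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        try:
            with conn.cursor() as cur:
                txn_body(cur)
            conn.commit()
            return
        except psycopg2.Error as e:
            conn.rollback()
            if e.pgcode not in RETRYABLE or attempt == max_attempts:
                raise

def transfer(cur):
    # The body must be safe to re-execute from scratch.
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))

conn = psycopg2.connect("dbname=test")
conn.set_session(isolation_level="SERIALIZABLE")
run_with_retry(conn, transfer)

The only real requirement this puts on the application is that the
transaction body be safe to replay from the top, with no side effects
outside the database.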

> people might write a single-statement SQL transaction and not
> bother writing retry logic and then be surprised by errors.

As has often been said here -- you can't always protect people from
their own stupidity.

> I'm unclear why serialization failures would be rare.

Did I say that somewhere???

> It seems better to report the situation to the user all the time
> since they have to handle it already and might want to know about
> the problem and implement some kind of backoff

The point was to avoid a serialization failure and its related
rollback. Do you think we should be reporting something to the
users every time a READ COMMITTED transaction blocks and then picks
the updated row? (Actually, given that the results may be based on
an inconsistent view of the database, maybe we should....)
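
To illustrate the case I mean, here's a rough sketch in Python with
psycopg2 (the table and connection string are placeholders): session
B's UPDATE blocks behind session A's uncommitted change, and when A
commits, B simply proceeds against the new row version with nothing
reported to anybody.

import threading
import time
import psycopg2

DSN = "dbname=test"  # placeholder

def session_b():
    conn_b = psycopg2.connect(DSN)
    with conn_b.cursor() as cur:
        # Blocks here until session A commits, then re-evaluates the
        # WHERE clause against the updated row and proceeds silently.
        cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
    conn_b.commit()
    conn_b.close()

conn_a = psycopg2.connect(DSN)
cur_a = conn_a.cursor()
cur_a.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")  # A holds the row lock

t = threading.Thread(target=session_b)
t.start()
time.sleep(1)      # give B a moment to queue up behind A's lock
conn_a.commit()    # B's UPDATE now completes; no error anywhere
t.join()
conn_a.close()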

> This isn't the first time that we've seen advantages that could be
> had from packaging up a whole transaction so the database can see
> everything the transaction needs to do. Perhaps we should have an
> interface for saying you're going to feed a series of commands
> which you want the database to repeat for you verbatim
> automatically on serialization failures. Since you can't construct
> the queries based on the results of previous queries the database
> would be free to buffer them all up and run them together at the
> end of the transaction which would allow the other tricky
> optimizations we've pondered in the past as well.
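
If I'm following, from the client side that would look roughly like
this (a purely hypothetical sketch in Python with psycopg2; no such
interface exists today, and the names are invented):

import psycopg2
from psycopg2 import errorcodes

class BufferedTransaction:
    """Buffer statements and replay the whole batch verbatim on
    serialization failure.  Hypothetical; client-side only."""

    def __init__(self, dsn, max_attempts=5):
        self.dsn = dsn
        self.max_attempts = max_attempts
        self.statements = []

    def add(self, sql, params=None):
        # Statements may not depend on the results of earlier ones,
        # so they can simply be queued until run() is called.
        self.statements.append((sql, params))

    def run(self):
        conn = psycopg2.connect(self.dsn)
        try:
            for attempt in range(1, self.max_attempts + 1):
                try:
                    with conn.cursor() as cur:
                        for sql, params in self.statements:
                            cur.execute(sql, params)
                    conn.commit()
                    return
                except psycopg2.Error as e:
                    conn.rollback()
                    if (e.pgcode != errorcodes.SERIALIZATION_FAILURE
                            or attempt == self.max_attempts):
                        raise
        finally:
            conn.close()

txn = BufferedTransaction("dbname=test")
txn.add("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
txn.add("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
txn.run()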

How is that different from putting the logic into a function and
retrying on serialization failure? Are you just proposing a more
convenient mechanism to do the same thing?

-Kevin
