Re: Serializable snapshot isolation patch

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Robert Haas" <robertmhaas(at)gmail(dot)com>
Cc: "Jeff Davis" <pgsql(at)j-davis(dot)com>,<pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Serializable snapshot isolation patch
Date: 2010-10-20 15:06:14
Message-ID: 4CBEBF160200002500036B7E@gw.wicourts.gov
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Oct 19, 2010 at 6:28 PM, Kevin Grittner
> <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> One thing that would work, but I really don't think I like it, is
>> that a request for a snapshot for such a transaction would not
>> only block until it could get a "clean" snapshot (no overlapping
>> serializable non-read-only transactions which overlap
>> serializable transactions which wrote data and then committed in
>> time to be visible to the snapshot being acquired), but it would
>> *also* block *other* serializable transactions, if they were
>> non-read-only, on an attempt to acquire a snapshot.
>
> This seems pretty close to guaranteeing serializability by running
> transactions one at a time (i.e. I don't think it's likely to be
> acceptable from a performance standpoint).

It makes absolutely no sense except for long-running read-only
transactions, and would only be used when explicitly requested; and
like I said, I really don't think I like it even on that basis --
I'm just putting it out there as the only alternative I've found so
far to either tolerating possible serialization anomalies in
pg_dump output (albeit only when compared to the state the database
reached after the dump's snapshot) or waiting indefinitely for a
clean snapshot to become available.

FWIW, from a brainstorming perspective: while waiting for problem
transactions to clear so we could get a clean snapshot for the dump,
I think it would work even better to block the *commit* of
serializable transactions which *had done* writes than to block
snapshot acquisition for serializable transactions which were not
read-only. Still pretty icky, though. I am loath to compromise
the "no new blocking" promise of SSI.

[thinks]

Actually, maybe we can reduce the probability of needing to retry
at each iteration of the non-blocking alternative by checking the
conflict information for the problem transactions after they commit.
Any transaction which didn't *actually* generate a read-write
conflict out to a transaction visible to the dump's candidate
snapshot could not cause an anomaly. If none of the problem
transactions actually generates a rw-conflict we can use the
candidate snapshot. Adding that logic to the non-blocking
alternative might actually work pretty well.
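To make the shape of that loop concrete, here's a rough sketch in
Python (illustrative only -- not PostgreSQL internals; FakeTxn,
get_candidate, and problems_for are invented stand-ins for the real
bookkeeping):

```python
import time
from dataclasses import dataclass


@dataclass
class FakeTxn:
    """Stand-in for an overlapping serializable read-write transaction."""
    active: bool = False           # still running?
    committed: bool = True         # committed (vs. rolled back)?
    conflict_out_visible: bool = False  # rw-conflict out to a txn visible
                                        # to the candidate snapshot?


def acquire_safe_snapshot(get_candidate, problems_for, poll=0.0):
    """Retry candidate snapshots until one is provably anomaly-free.

    A candidate is usable once every "problem" transaction has
    finished and none of the committed ones actually generated a
    rw-conflict out to a transaction visible to the candidate.
    """
    while True:
        snap = get_candidate()
        problems = problems_for(snap)
        # Wait (non-blocking for everyone else) for the problem
        # transactions to commit or roll back.
        while any(t.active for t in problems):
            time.sleep(poll)
        # If none of them actually generated a rw-conflict out to a
        # transaction visible to the snapshot, no anomaly is possible
        # and the candidate snapshot can be used as-is.
        if not any(t.conflict_out_visible
                   for t in problems if t.committed):
            return snap
        # Otherwise take a fresh candidate snapshot and retry.


# Demo: the first candidate is tainted by an actual rw-conflict,
# so the loop retries and accepts the second, clean candidate.
candidates = iter([
    ("snap1", [FakeTxn(conflict_out_visible=True)]),
    ("snap2", [FakeTxn(conflict_out_visible=False)]),
])
state = {}

def get_candidate():
    name, probs = next(candidates)
    state[name] = probs
    return name

safe = acquire_safe_snapshot(get_candidate, lambda s: state[s])
# safe is "snap2": the tainted candidate was discarded.
```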

There might be some workloads where conflicts would be repeatedly
generated, but there would be a lot where they wouldn't. If we add
a switch to pg_dump to allow users to choose, I think this algorithm
works. It never affects a transaction unless it has explicitly
requested SERIALIZABLE READ ONLY DEFERRABLE, and the only impact is
that startup may be deferred until a snapshot can be acquired which
ensures serializable behavior without worrying about SIRead locks.
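For what it's worth, the explicit request could look something like
this (syntax purely illustrative -- nothing here is settled):

```sql
-- Illustrative only: one possible spelling of the explicit request
-- described above, for use by pg_dump or similar read-only clients.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
-- startup may wait here until a provably safe snapshot is available;
-- after that, reads proceed with no SIRead lock overhead
COMMIT;
```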

-Kevin
