Re: true serializability and predicate locking

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: true serializability and predicate locking
Date: 2010-01-07 21:17:34
Message-ID: 407d949e1001071317x5da30b37ya1872be5cce61f8c@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 7, 2010 at 8:43 PM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> No, it's an attempt to reflect the difference in costs for true
> serializable transactions, so that the optimizer can choose a plan
> appropriate for that mode, versus some other.  In serializable
> transaction isolation there is a higher cost per tuple read, both
> directly in locking and indirectly in increased rollbacks; so why
> lie to the optimizer about it and say it's the same?

This depends on how you represent the predicates. If you represent the
predicate by indicating that you might have read any record in the
table -- i.e. a full-table lock -- then you would have very low
overhead per tuple read, effectively zero. The chances of a
serialization failure would go up, but I don't see how to represent
that as a planner cost.

But this isn't directly related to the plan in any case. You could do
a full table scan but record in the predicate lock that you were only
interested in records satisfying certain constraints. Or you could do
an index scan but decide to represent the predicate lock as a
full-table lock anyway.
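To make the trade-off concrete, here is a rough sketch of the two
granularities being discussed. This is illustrative pseudocode, not
PostgreSQL source; the class and method names are invented. The point
is that a coarse table-level lock is nearly free to register but makes
every later write to the table a potential conflict, while a
fine-grained range lock costs bookkeeping per scan but only conflicts
with writes that actually fall inside what was read.

```python
# Illustrative sketch (hypothetical names, not PostgreSQL code):
# two granularities of predicate lock a reader might register.

class PredicateLockTable:
    """Tracks what each transaction has (logically) read."""

    def __init__(self):
        self.table_locks = set()   # txn ids holding a whole-table lock
        self.range_locks = {}      # txn id -> list of (lo, hi) key ranges

    def lock_table(self, txn):
        # Coarse: one O(1) registration covers the whole scan, but every
        # subsequent write to the table conflicts with this reader.
        self.table_locks.add(txn)

    def lock_range(self, txn, lo, hi):
        # Fine: record the constraint actually scanned; more bookkeeping,
        # but only writes inside the range conflict.
        self.range_locks.setdefault(txn, []).append((lo, hi))

    def conflicting_readers(self, key):
        # A writer inserting `key` asks which readers it conflicts with
        # (candidates for serialization failure).
        hits = set(self.table_locks)
        for txn, ranges in self.range_locks.items():
            if any(lo <= key <= hi for lo, hi in ranges):
                hits.add(txn)
        return hits

locks = PredicateLockTable()
locks.lock_table("T1")          # T1 scanned the table, locked coarsely
locks.lock_range("T2", 10, 20)  # T2 locked only the range it read

print(locks.conflicting_readers(15))  # conflicts with both T1 and T2
print(locks.conflicting_readers(99))  # conflicts with T1 only
```

Note that, as the paragraph above says, the choice of representation is
independent of the plan: a sequential scan could still call lock_range
with the qualifying constraint, and an index scan could fall back to
lock_table.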

--
greg
