From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: CommandCounterIncrement versus plan caching
Date: 2007-11-30 17:15:24
Message-ID: 3563.1196442924@sss.pgh.pa.us
Lists: pgsql-hackers

I wrote:
> One fairly simple answer is to insert a CCI call at the start of
> RevalidateCachedPlan. I dislike that solution, at least by itself,
> on two grounds:
> ...
> I've also thought about rearranging the current conventions for where to
> call CCI. This particular form of the problem would go away if SPI
> command execution did CCI after, instead of before, each non-read-only
> command. Or perhaps safer, before each such command and after the last
> one.

After further thought, I've concluded that the second of these
approaches is the more attractive, because it avoids adding CCI calls
into read-only functions. While I haven't yet tested any of this, the
plan that is in my head is:

1. Add "if (!read_only) CommandCounterIncrement();" at the end of
_SPI_execute_plan(). We keep the "before" call, though, so that a
volatile function still sees the partial results of a calling query;
that's how it's worked historically and I don't want to be tinkering
with those semantics right now.

2. Remove the CCI call at the top of _SPI_prepare_plan. It should be
unnecessary given that we are now expecting any previous DDL to have
been followed by CCI. (If it *isn't* unnecessary, then this whole idea
is wrong, because paths that involve re-using a previously prepared
plan instead of making a new one will still be broken.) A sketch of
both changes appears just below this list.

3. Do something to ameliorate the consequences of the increased number
of CCI calls.
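
In sketch form (untested, with arguments and error handling elided,
and the command loop schematic), steps 1 and 2 would look about like
this:

static int
_SPI_execute_plan(SPIPlanPtr plan, /* ... */ bool read_only, long tcount)
{
    /* for each command in the plan: */
    {
        if (!read_only)
            CommandCounterIncrement();  /* existing "before" call, kept */

        /* ... plan (if necessary) and run the command ... */
    }

    if (!read_only)
        CommandCounterIncrement();      /* step 1: the new "after" call */

    /* ... */
}

/*
 * Step 2 is purely subtractive: _SPI_prepare_plan just loses the
 * CommandCounterIncrement() at its top, since any preceding DDL is now
 * expected to have been followed by a CCI already.
 */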

As previously mentioned, the main problem with this approach is that
for the typical case of one SQL command per _SPI_execute_plan call,
we'd be doubling the number of CCI calls and thus consuming command
IDs twice as fast. I propose fixing that by not allocating a new
command ID unless the previous ID was actually used to mark a tuple.
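
The bookkeeping would be about like this standalone toy (not server
code; the names mirror what I'd use, but nothing here is final):

#include <stdio.h>
#include <stdbool.h>

typedef unsigned int CommandId;

static CommandId currentCommandId = 0;
static bool currentCommandIdUsed = false;

/* Callers pass used = true when the ID will be used to mark tuples. */
static CommandId
GetCurrentCommandId(bool used)
{
    if (used)
        currentCommandIdUsed = true;
    return currentCommandId;
}

static void
CommandCounterIncrement(void)
{
    /* Consume a new ID only if the current one was dirtied. */
    if (currentCommandIdUsed)
    {
        currentCommandId++;
        currentCommandIdUsed = false;
        /* the real thing would also update snapshots, etc. */
    }
}

int
main(void)
{
    GetCurrentCommandId(false);     /* read-only use: ID stays clean */
    CommandCounterIncrement();      /* no-op, cid is still 0 */
    GetCurrentCommandId(true);      /* an update marks tuples with cid 0 */
    CommandCounterIncrement();      /* advances, cid becomes 1 */
    printf("cid = %u\n", currentCommandId);     /* prints "cid = 1" */
    return 0;
}

With this, back-to-back CCI calls in a command that wrote nothing cost
no command IDs, so the extra "after" call in _SPI_execute_plan is
nearly free.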

Looking at the uses of GetCurrentCommandId, it seems that we can
distinguish "read only" from "read/write" calls easily in many places,
but there is one problem: the Executor uses the passed-in snapshot's
curcid as the CommandId to write tuples with. When we set up a snapshot
we typically don't know whether it will be used with a SELECT or an
updating query, so we cannot decide at that point whether the command ID
has been "dirtied" or not.

I think this can be fixed by changing the Executor so that it doesn't
use snapshot->curcid for this purpose. Instead, add a field to EState
showing the CommandId to mark tuples with. ExecutorStart, which has
enough information to know whether the query is read-only or not,
can set this field, or not, and tell GetCurrentCommandId to mark the
command ID "dirty" (or not). In practice, it appears that all callers
of the Executor pass in snapshots that have current curcid, and so
this would not result in any actual change of the CID being used.
(If a caller did pass in a snap with an older CID, there'd still not
be any real change of behavior --- correct behavior ensues as long
as the executor's output CID is >= snapshot CID.)
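
Schematically (es_output_cid is a placeholder name, and only the
relevant lines are shown):

void
ExecutorStart(QueryDesc *queryDesc, int eflags)
{
    EState     *estate = CreateExecutorState();

    /*
     * Does this query write tuples?  (Something like SELECT FOR UPDATE
     * would presumably have to count as writing, too.)
     */
    bool        writes = (queryDesc->operation != CMD_SELECT);

    /*
     * Fetch the CID to mark tuples with, dirtying it only if we will
     * write; heap_insert/update/delete would then take the CID from
     * estate->es_output_cid rather than from snapshot->curcid.
     */
    estate->es_output_cid = GetCurrentCommandId(writes);

    /* ... */
}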

One fine point is that we have to mark the ID dirty at ExecutorStart
time, whether or not the query actually ends up marking any tuples with
it; we cannot wait until a heap_insert/update/delete actually happens
with it, as I'd first thought. The problem is that the query might
call a volatile function before it first inserts any tuple, and that
function needs to take a new command ID for itself; if it doesn't,
we could later conflate the output of the function with the output of
the calling query.

Once we know whether the current command ID is "dirty", we can skip
everything inside CommandCounterIncrement when it is not, except for
the AtStart_Cache() call, i.e., AcceptInvalidationMessages().
What that is looking for is asynchronous DDL-change notifications from
other backends. I believe that it is actually not necessary for
correctness for CCI to do that, because we should (had better) have
adequate locking to ensure that messages about any particular table are
absorbed before we touch that table. Rather, the reasoning for having
this in CCI is to make sure we do it often enough in a long-running
transaction to keep the sinval message queue from overflowing. I am
tempted to remove that from CCI and call it from just a selected few CCI
call sites, instead --- maybe only CommitTransactionCommand. OTOH this
step might reasonably be considered too risky for late beta, since it
would affect asynchronous backend interactions, which are way harder to
test properly than within-a-backend behavior.
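
That is, roughly (untested, and whether AtStart_Cache() stays where
shown is exactly the question):

void
CommandCounterIncrement(void)
{
    if (currentCommandIdUsed)
    {
        currentCommandId += 1;
        /* ... check for command counter overflow ... */
        currentCommandIdUsed = false;

        /* ... propagate the new CID into the static snapshots ... */

        AtCommit_LocalCache();  /* make our own catalog changes visible */
    }

    /*
     * Absorb other backends' sinval traffic even when the counter does
     * not advance, so a long transaction can't overflow the message
     * queue; unless this moves to CommitTransactionCommand instead.
     */
    AtStart_Cache();
}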

Comments?

regards, tom lane
