Re: [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]

From: Ian Jackson <ian(dot)jackson(at)eu(dot)citrix(dot)com>
To: Kevin Grittner <kgrittn(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, <xen-devel(at)lists(dot)xenproject(dot)org>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]
Date: 2016-12-15 15:53:53
Message-ID: 22610.48273.860663.838783@mariner.uk.xensource.com
Lists: pgsql-hackers

Kevin Grittner writes ("Re: [HACKERS] [OSSTEST PATCH 0/1] PostgreSQL db: Retry on constraint violation [and 2 more messages] [and 1 more messages]"):
> On Thu, Dec 15, 2016 at 6:09 AM, Ian Jackson <ian(dot)jackson(at)eu(dot)citrix(dot)com> wrote:
> > [...] Are there other reasons,
> > besides previously suppressed serialisation failures, why commit of a
> > transaction that did only reads[1] might fail ?
>
> I'm pretty confident that if you're not using prepared transactions
> the answer is "no". [...] I fear that [for now], if "pre-crash"
> prepared transactions are still open, some of the deductions above
> may not hold.

I think it is reasonable to write in the documentation "if you use
prepared transactions, even read only serialisable transactions might
throw a serialisation failure during commit, and they might do so
after returning data which is not consistent with any global
serialisation".
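To illustrate what that documented behaviour would mean for an application, here is a minimal retry sketch. The exception class and function names are illustrative stand-ins, not any real driver's API; a real driver such as psycopg2 reports a serialisation failure via SQLSTATE 40001. The point is that even a read only serialisable transaction must keep its commit inside the try block, since the failure may be raised by COMMIT itself:

```python
# Minimal application-side retry sketch.  SerializationFailure stands in
# for the driver error carrying SQLSTATE 40001; all names are illustrative.

class SerializationFailure(Exception):
    """Stand-in for a driver error with SQLSTATE 40001."""

def run_serializable(txn_body, max_attempts=5):
    """Run txn_body as one transaction, retrying on serialisation failure.

    The commit step must sit inside the try block, because the failure
    can be raised at COMMIT even for a read-only transaction; on failure
    the whole body is re-run from the top.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            result = txn_body()  # BEGIN ISOLATION LEVEL SERIALIZABLE; reads...
            # conn.commit() would go here, and may *also* raise
            return result
        except SerializationFailure:
            if attempt == max_attempts:
                raise  # give up and surface the failure to the caller
            # conn.rollback() here, then loop to retry from scratch
```

Until the retried COMMIT succeeds, nothing the transaction returned should be treated as consistent.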

Prepared transactions are a special purpose feature intended for use
by external transaction management software which I hope could cope
with a requirement to not trust data from a read only transaction
until it had been committed.  (Also, frankly, the promise that a
prepared transaction can be committed successfully with "very high
probability" is not sufficiently precise to be of use when building
robust software at the next layer up.)

> One other situation in which I'm not entirely sure, and it would
> take me some time to review code to be sure, is if
> max_pred_locks_per_transaction is not set high enough to
> accommodate tracking all serializable transactions in allocated RAM
> (recognizing that they must often be tracked after commit, until
> overlapping serializable transactions commit), we have a mechanism
> to summarize some of the committed transactions and spill them to
> disk (using an internal SLRU module). The summarized data might
> not be able to determine all of the above as precisely as the
> "normal" data tracked in RAM. To avoid this, be generous when
> setting max_pred_locks_per_transaction; not only will it avoid this
> summarization, but it will reduce the amount of summarization of
> multiple page locks in the predicate locking system to relation
> locks. Coarser locks increase the "false positive" rate of
> serialization failures, reducing performance.

I don't think "set max_pred_locks_per_transaction generously" is a
practical thing to write in the documentation, because the application
programmer, or admin, has no sensible way to calculate what a
sufficiently generous value is.
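For reference, the knob in question is a postgresql.conf parameter that can only be changed at server start; the value below is purely illustrative (the shipped default is 64), which is exactly the difficulty: nothing tells the admin whether 256, or 2560, is "sufficiently generous" for their workload.

```
# postgresql.conf -- illustrative value only; requires a server restart
max_pred_locks_per_transaction = 256    # shipped default is 64
```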

You seem to be implying that code relying on the summarised data might
make over-optimistic decisions.  That seems dangerous to me, but (with
my very dim view of database innards) I can't immediately see how to
demonstrate that this possibility is excluded.

But, I think this can only be a problem (that is, it can only cause a
return of un-serialisable results within such a transaction) if, after
such a spill, COMMIT would recalculate the proper answers, in full,
and thus be able to belatedly report the serialisation failure.  Is
that the case?

> > If so presumably it always throws a serialisation failure at that
> > point. I think that is then sufficient. There is no need to tell the
> > application programmer they have to commit even transactions which
> > only read.
>
> Well, if they don't explicitly start a transaction there is no need
> to explicitly commit it, period. [...]

Err, yes, I meant multi-statement transactions. (Or alternatively by
"have to commit" I meant to include the implicit commit of an implicit
transaction.)

> If you can put together a patch to improve the documentation, that
> is always welcome!

Thanks. I hope I will be able to do that. Right now I am still
trying to figure out what guarantees the application programmer can be
offered.

Regards,
Ian.
