Re: BUG #12330: ACID is broken for unique constraints

From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: "nikita(dot)y(dot)volkov(at)mail(dot)ru" <nikita(dot)y(dot)volkov(at)mail(dot)ru>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: BUG #12330: ACID is broken for unique constraints
Date: 2014-12-26 15:23:37
Message-ID: 969937206.892402.1419607417808.JavaMail.yahoo@jws10072.mail.ne1.yahoo.com
Lists: pgsql-bugs pgsql-hackers

"nikita(dot)y(dot)volkov(at)mail(dot)ru" <nikita(dot)y(dot)volkov(at)mail(dot)ru> wrote:

> Executing concurrent transactions that insert the same value
> for a unique key fails with a "duplicate key" error under code
> "23505" instead of one of the transaction conflict errors with
> a "40***" code.

This is true, and can certainly be inconvenient when using
serializable transactions to simplify handling of race conditions,
because you can't just test for a SQLSTATE of '40001' or '40P01' to
indicate the need to retry the transaction. You have two
reasonable ways to avoid duplicate keys if the values are synthetic
and automatically generated. One is to use a SEQUENCE object to
generate the values. The other (really only recommended if gaps in
the sequence are a problem) is to have the serializable transaction
update a row to "claim" the number.
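For example, a minimal sketch of the SEQUENCE approach might look
like the following (psycopg2 on the client side; the "invoice"
table, "invoice_id_seq" sequence, and connection string are
hypothetical names for illustration):

    import psycopg2

    # Hypothetical connection string and schema for illustration.
    conn = psycopg2.connect("dbname=test")
    with conn.cursor() as cur:
        # nextval() never hands the same value to two concurrent
        # transactions, so the unique key cannot collide here.
        cur.execute("SELECT nextval('invoice_id_seq')")
        new_id = cur.fetchone()[0]
        cur.execute("INSERT INTO invoice (id, total) VALUES (%s, %s)",
                    (new_id, 100))
    conn.commit()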

Otherwise you need to consider errors related to duplicates as
possibly being caused by a concurrent transaction. You may want to
do one transaction retry in such cases, and fail if an identical
error is seen. Keep in mind that these errors do not allow
serialization anomalies to appear in the committed data, so are
arguably not violations of ACID principles -- more of a wart on the
otherwise clean technique of using serializable transactions to
simplify application programming under concurrency.
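To illustrate, here is a minimal client-side retry sketch
(psycopg2 assumed; the "invoice" table, connection string, and
run_with_retry() helper are hypothetical). It retries on the
transaction conflict SQLSTATEs and retries exactly once on a
unique violation, failing if the identical error repeats:

    import psycopg2
    from psycopg2 import errorcodes

    RETRYABLE = {errorcodes.SERIALIZATION_FAILURE,  # SQLSTATE 40001
                 errorcodes.DEADLOCK_DETECTED}      # SQLSTATE 40P01

    def run_with_retry(conn, work, max_retries=5):
        retries = 0
        dup_seen = False
        while True:
            try:
                with conn.cursor() as cur:
                    work(cur)
                conn.commit()
                return
            except psycopg2.Error as e:
                conn.rollback()
                if e.pgcode in RETRYABLE and retries < max_retries:
                    retries += 1
                    continue
                # Treat the first duplicate-key error as possibly
                # caused by a concurrent transaction; fail if the
                # identical error is seen again.
                if (e.pgcode == errorcodes.UNIQUE_VIOLATION
                        and not dup_seen):
                    dup_seen = True
                    continue
                raise

    conn = psycopg2.connect("dbname=test")
    conn.set_session(isolation_level='SERIALIZABLE')
    run_with_retry(conn, lambda cur: cur.execute(
        "INSERT INTO invoice (id, total) VALUES (%s, %s)", (1, 100)))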

Thinking about it just now, I think we might be able to generate
a write conflict instead of a duplicate key error for this case
by checking the visibility information for the duplicate row. It
might not even have a significant performance impact, since we
already need to check visibility information to generate the
duplicate key error. That would still leave similar issues (where
similar arguments can be made) relating to foreign keys; but
those can largely be addressed already by declaring the
constraints to be DEFERRABLE INITIALLY DEFERRED -- and anyway,
that would be a separate fix.
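For completeness, a constraint must be declared DEFERRABLE for
its check to be postponed to commit time; a hedged sketch (again
via psycopg2, with hypothetical "parent" and "child" tables):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    with conn.cursor() as cur:
        # The FK check now runs at COMMIT rather than after each
        # statement within the transaction.
        cur.execute("""
            ALTER TABLE child
                ADD CONSTRAINT child_parent_fk
                FOREIGN KEY (parent_id) REFERENCES parent (id)
                DEFERRABLE INITIALLY DEFERRED
        """)
    conn.commit()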

I'm moving this discussion to the -hackers list so that I can ask
other developers:

Are there any objections to generating a write conflict instead of
a duplicate key error if the duplicate key was added by a
concurrent transaction? Only for transactions at isolation level
REPEATABLE READ or higher?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
