Re: WIP: Deferrable unique constraints

From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Dean Rasheed <dean(dot)a(dot)rasheed(at)googlemail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: WIP: Deferrable unique constraints
Date: 2009-07-14 16:56:48
Message-ID: 1247590608.13173.29.camel@jdavis
Lists: pgsql-hackers

On Sun, 2009-07-12 at 14:14 +0100, Dean Rasheed wrote:
> Here is an updated version of this patch which should apply to HEAD,
> with updated docs, regression tests, pg_dump and psql \d.
>
> It works well for small numbers of temporary uniqueness violations,
> and at least as well (in fact about twice as fast) as deferred FK
> checks for large numbers of deferred checks.

I took a brief look at this. You're extending the index AM, which may
not be necessary. It might be fine, but changes to important APIs
usually draw a lot of discussion, so it's worth looking at alternatives.

With the patch I'm working on for generalized index constraints, there
would be no need to extend the index AM. However, I don't expect my
mechanism to replace the existing unique btree constraints, because I
expect the existing constraints to be faster (I haven't tested that yet,
though).

Perhaps we could instead use the TRY/CATCH mechanism. Catching an error
generally makes it difficult to tell exactly what happened, but in this
case we have a specific error code, ERRCODE_UNIQUE_VIOLATION, so you
could just check for that error code rather than passing back a boolean.
You might want to change the signature of _bt_check_unique() so that it
doesn't raise the error itself; instead, you could raise the error from
_bt_doinsert().
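
For illustration, here's a minimal sketch of what catching that specific
error code might look like from the caller's side. The error-handling
calls are the usual backend ones; do_index_insert() and
remember_deferred_conflict() are just hypothetical placeholders, not
anything in your patch or in core:

#include "postgres.h"

/* hypothetical placeholders, not real functions */
extern void do_index_insert(void);
extern void remember_deferred_conflict(void);

static void
try_unique_insert(void)
{
    MemoryContext oldcontext = CurrentMemoryContext;

    PG_TRY();
    {
        /* the (uniqueness-checked) index insertion goes here */
        do_index_insert();
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        /* don't call CopyErrorData() while still in ErrorContext */
        MemoryContextSwitchTo(oldcontext);
        edata = CopyErrorData();
        FlushErrorState();

        if (edata->sqlerrcode == ERRCODE_UNIQUE_VIOLATION)
        {
            /* temporary violation: remember it for the deferred check */
            remember_deferred_conflict();
            FreeErrorData(edata);
        }
        else
            ReThrowError(edata);    /* anything else is a real error */
    }
    PG_END_TRY();
}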

The only problem there is telling the btree AM whether to do a fake or
a real insert. Perhaps you could do that with careful use of a global
variable?
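
Something like the following is the kind of flag I mean. The name and
placement are made up, just to illustrate; the caller would set it before
the insert and would also need to reset it in the error path (which is
part of why this is a bit ugly):

#include "postgres.h"

/*
 * Hypothetical backend-global flag: true means a "fake" insert, i.e. run
 * the uniqueness check but skip the actual insertion.
 */
bool        bt_fake_insert = false;

/* hypothetical stand-in for the tail end of _bt_doinsert() */
static bool
bt_doinsert_sketch(void)
{
    /* ... the uniqueness check has already run at this point ... */

    if (bt_fake_insert)
        return true;        /* deferred re-check: don't really insert */

    /* ... do the real btree insertion here ... */
    return true;
}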

Sure, all of this is a little ugly, but we've already acknowledged that
there is some ugliness around the existing unique constraint and the
btree code that supports it (for one, the btree AM accesses the heap).

> I propose trying to improve performance and scalability for large
> numbers of deferred checks in a separate patch.

Would it be possible to check how long the list of potential conflicts
is getting, and if it grows too big, replace them all with a single
"bulk check" event?

Regards,
Jeff Davis
