Re: WIP: Deferrable unique constraints

From: Dean Rasheed <dean(dot)a(dot)rasheed(at)googlemail(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: WIP: Deferrable unique constraints
Date: 2009-07-14 19:32:53
Message-ID: 8e2dbb700907141232k51818f7bm83ed0515bf96da27@mail.gmail.com
Lists: pgsql-hackers

2009/7/14 Alvaro Herrera <alvherre(at)commandprompt(dot)com>:
> Jeff Davis wrote:
>
>> The only problem there is telling the btree AM whether or not to do the
>> insert (i.e. fake versus real insert). Perhaps you can just do
>> that with careful use of a global variable?
>>
>> Sure, all of this is a little ugly, but we've already acknowledged that
>> there is some ugliness around the existing unique constraint and the
>> btree code that supports it (for one, the btree AM accesses the heap).
>

Well, the ugliness referred to here (the btree AM accessing the heap)
seems like a necessary evil. I don't think I want to add to it by
introducing global variables.
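To illustrate the shape of what's being suggested (a purely
hypothetical sketch -- none of these names exist in the backend), the
idea is a global flag that the deferred re-check would set so that the
index-insert path performs only the uniqueness check rather than a
real insert:

/*
 * Hypothetical sketch only, not actual PostgreSQL code.  A global flag
 * set by the deferred-constraint re-check before calling back into the
 * index AM, asking for a check-only ("fake") insert.
 */
#include <stdbool.h>
#include <stdio.h>

static bool fake_unique_insert = false;   /* hypothetical global flag */

/* Stand-in for the btree insert path: always checks uniqueness, but
 * only performs the insertion when a real insert was requested. */
static void
btree_insert_example(int key)
{
    printf("checking uniqueness of key %d\n", key);
    if (!fake_unique_insert)
        printf("inserting key %d\n", key);    /* real insert */
}

int
main(void)
{
    /* Deferred re-check: set the flag, call in, then restore it. */
    fake_unique_insert = true;
    btree_insert_example(42);
    fake_unique_insert = false;

    /* Normal path: real insert. */
    btree_insert_example(7);
    return 0;
}

It works, but it is exactly the sort of hidden state I'd rather not
spread further through that code.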

> My 2c on this issue: if this is ugly (and it is) and needs revisiting to
> extend it, please by all means let's make it not ugly instead of moving
> the ugliness around.  I didn't read the original proposal in detail so
> IMBFOS, but it doesn't seem like using our existing deferred constraints
> to handle uniqueness checks unuglifies this code enough ...  For example
> I think we'd like to support stuff like "UPDATE ... SET a = -a" where
> the table is large.
>

This patch works OK for around 1M rows. 10M is a real stretch (for me
it took around 1.7GB of backend memory), and anything larger than that
is not going to be feasible. There is a separate TODO item to address
this scalability limit for deferred triggers, and I'd like to tackle
it in a separate patch. More discussion is needed on ways to fix it
(and hopefully unuglify that code in the process).

ISTM that it is not simply a matter of spooling the current queues to
disk. There is code in there which scans whole queues, shuffling things
around. So perhaps a queue per trigger would help optimise this,
allowing us to move a whole queue cheaply, or drop it in favour of a
bulk check. I've not thought it through much more than that so far.
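Very roughly, and purely as a hypothetical sketch (none of these types
or names match the real deferred-trigger code), a per-trigger queue
might look something like this, which would make it cheap to throw the
whole queue away in favour of a bulk re-check:

/*
 * Hypothetical sketch of a per-trigger event queue; not the real
 * deferred-trigger structures.
 */
#include <stdlib.h>

typedef struct DeferredEvent
{
    struct DeferredEvent *next;
    /* ... identity of the modified tuple, etc. ... */
} DeferredEvent;

typedef struct TriggerQueue
{
    unsigned int   trigger_oid;    /* which trigger these events are for */
    DeferredEvent *head;
    DeferredEvent *tail;
    size_t         nevents;
    int            do_bulk_check;  /* events dropped; re-check in bulk */
} TriggerQueue;

/* Drop all queued events for one trigger and fall back to a bulk
 * check, e.g. once the queue grows past some threshold. */
static void
queue_switch_to_bulk_check(TriggerQueue *q)
{
    DeferredEvent *ev = q->head;

    while (ev)
    {
        DeferredEvent *next = ev->next;

        free(ev);
        ev = next;
    }
    q->head = q->tail = NULL;
    q->nevents = 0;
    q->do_bulk_check = 1;
}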

- Dean
