Re: RI triggers and schemas

From: Alex Hayward <xelah(at)xelah(dot)com>
To: PostgreSQL Hackers Mailing List <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: RI triggers and schemas
Date: 2002-03-27 13:51:28
Message-ID: Pine.LNX.4.33.0203271255110.6431-100000@sphinx.mythic-beasts.com
Lists: pgsql-hackers

On Tue, 26 Mar 2002, Stephan Szabo wrote:

>
> On Tue, 26 Mar 2002, Jan Wieck wrote:
>
> > Tom Lane wrote:
> > > I think the existing scheme of generating the plan during first use
> > > in a particular backend is fine. At least as long as we're sticking
> > > with standard plans at all ... IIRC Stephan was wondering about
> > > bypassing the whole parse/plan mechanism in favor of heap-access-level
> > > operations.
> >
> > I don't know if using heap-access directly in the RI triggers
> > is such a good idea.
> >
> > It is guaranteed that there is a unique key covering all the
> > referenced columns (and only them). I'm not sure though if it
> > has to be in the same column order as the reference. Nor do I
> > think that matters other than making the creation of the
> > scankey a bit more difficult.
> >
> > But on the foreign key side there could be no index, a partially
> > matching one, a fully matching one, or maybe one with extra columns
> > at the end. So for the referential action, the entire process of
> > deciding which index fits best, pushing some of the qualification
> > into the index scankey, and doing the rest on the heap tuples, has
> > to be duplicated here.
>
> That is the problem I've run into while working on it. I'm still
> trying to figure out which levels of that code can be used.
>
> The advantage that I see is that we get more control over the time
> qualifications used for tuples which may come into play for match
> partial. I'm not sure that it's worth the effort to try doing it
> this way, but I figured I'd try it.

Another thing to bear in mind:

We (www.seatbooker.net, that is) have had a certain amount of trouble with
contention for locks taken out by RI triggers - in particular triggers on
FK tables where large numbers of rows refer to a small number of rows in
the PK table.
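To make the scenario concrete (table and column names here are invented,
and the query is only roughly what the RI check trigger issues today):

  -- Illustrative schema: many bookings reference a few venue rows.
  CREATE TABLE venue (
    id   integer PRIMARY KEY,
    name text
  );
  CREATE TABLE booking (
    id       serial PRIMARY KEY,
    venue_id integer REFERENCES venue (id)
  );

  -- On each INSERT/UPDATE of booking, the FK check runs something like
  --   SELECT 1 FROM ONLY venue x WHERE id = $1 FOR UPDATE OF x;
  -- so every concurrent insert referencing the same venue row queues up
  -- behind that single row lock.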

Having had a look at it, one possible solution seemed to be to do two
special queries in these triggers. Whilst checking an UPDATE/INSERT on
the FK table, do a special 'SELECT ... FOR UPDATE BARRIER' query which is
exactly like 'SELECT ... FOR UPDATE', including waiting for transactions
that have the row marked for update to commit/rollback, except that it
doesn't actually mark the row for update afterwards. Whilst checking a
DELETE/UPDATE on the PK table, do a 'SELECT ... FOR UPDATE INCLUDE
UNCOMMITTED LIMIT 1' (or a 'SELECT ... FOR UPDATE READ UNCOMMITTED', if
READ UNCOMMITTED can be made to work), which would do everything which
SELECT ... FOR UPDATE does but also wait for any transactions with
matching uncommitted rows to complete before returning.
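Using the same invented venue/booking tables, and with the caveat that
this is only proposed syntax, the two checks would look something like:

  -- FK-side check (INSERT/UPDATE on booking): wait for any transaction
  -- that has the referenced venue row marked for update, but don't mark
  -- it ourselves afterwards.
  SELECT 1 FROM ONLY venue x WHERE id = $1 FOR UPDATE BARRIER;

  -- PK-side check (DELETE/UPDATE on venue): behave like FOR UPDATE, but
  -- also wait for transactions holding matching uncommitted booking rows
  -- before returning.
  SELECT 1 FROM ONLY booking x WHERE venue_id = $1
    FOR UPDATE INCLUDE UNCOMMITTED LIMIT 1;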

If the RI triggers could have more direct control over the time
qualifications, then this could be implemented without the need for these
two queries... which, after all, are a bit of a hack.

Hmm, come to think of it, the check which is triggered on the PK update
probably doesn't need to mark anything for update at all - it might work
with just an update barrier that could include uncommitted rows and return
a 'matching rows existed' vs 'no matching rows existed' status. Perhaps
this would help eliminate the possibility of deadlocking whilst checking
the two constraints simultaneously for concurrent updates, too...

Unfortunately, whilst I've managed to write a (seemingly, anyway) working
'SELECT ... FOR UPDATE BARRIER' I haven't really got much time to work on
this any more. Comments on it wouldn't go amiss, though.
