Re: Command Triggers patch v18

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>
Cc: Andres Freund <andres(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Command Triggers patch v18
Date: 2012-03-29 10:57:16
Message-ID: CA+TgmoaRz1bfv3wEky1JWN6sfrSdsfvTU+FRBggQJG3Cwi0dkQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Mar 28, 2012 at 3:41 PM, Dimitri Fontaine
<dimitri(at)2ndquadrant(dot)fr> wrote:
> What about :
>
>  create command trigger foo before prepare alter table …
>  create command trigger foo before start of alter table …
>  create command trigger foo before execute alter table …
>  create command trigger foo before end of alter table …
>
>  create command trigger foo when prepare alter table …
>  create command trigger foo when start alter table …
>  create command trigger foo when execute of alter table …
>  create command trigger foo when end of alter table …
>
>  create command trigger foo preceding alter table …
>  create command trigger foo preceding alter table … deferred
>  create command trigger foo preceding alter table … immediate
>
>  create command trigger foo following parser of alter table …
>  create command trigger foo preceding execute of alter table …
>
>  create command trigger foo following alter table …
>
>  create command trigger foo before alter table nowait …
>
> I'm not sure how many hooks we can really export here, but I guess we
> have enough existing keywords to describe when they get to run (parser,
> mapping, lock, check, begin, analyze, access, disable, not exists, do,
> end, escape, extract, fetch, following, search…)

I am sure that we could find a way to beat this with a stick until it
behaves, but I don't really like that idea. It seems to me to be a
case of adding useless parser bloat. Prior to 9.0, when we introduced
the new EXPLAIN syntax, new EXPLAIN options were repeatedly proposed
and partly rejected on the grounds that they weren't important enough
to justify adding new keywords. We've now got EXPLAIN (COSTS,
BUFFERS, TIMING, FORMAT) and there will probably be more in the
future, and the parser overhead of adding a new one is zero. I think
we should learn from that lesson: when you may want to have a bunch of
different options and it's not too clear what they're all going to be
named, designing things in a way that avoids a dependency on the main
parser having the right keywords leads to less patch rejection and
more getting stuff done.
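For reference, the option style I mean looks like this; any of the options can be combined, and a new one extends the list without reserving a single new parser keyword:

```sql
-- Parenthesized EXPLAIN options (9.0 and later): the option names are
-- ordinary identifiers resolved inside EXPLAIN, not grammar keywords.
EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, FORMAT JSON)
SELECT * FROM pg_class;
```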

>> I've also had another general thought about safety: we need to make
>> sure that we're only firing triggers from places where it's safe for
>> user code to perform arbitrary actions.  In particular, we have to
>> assume that any triggers we invoke will AcceptInvalidationMessages()
>> and CommandCounterIncrement(); and we probably can't do it at all (or
>> at least not without some additional safeguard) in places where the
>> code does CheckTableNotInUse() up through the point where it's once
>> again safe to access the table, since the trigger might try to access
>> it.  We also
>
> I've been trying to do that already.
>
>> need to think about re-entrancy: that is, in theory, the trigger might
>> execute any other DDL command - e.g. it might drop the table that
>> we've been asked to rename.  I suspect that some of the current BEFORE
>
> That's why I implemented ALTER COMMAND TRIGGER ... SET DISABLE in the
> first place, so that you could run the same command from the trigger
> itself without infinite recursion.
>
>> trigger stuff might fall down on that last one, since the existing
>> code not-unreasonably expects that once it's locked the table, the
>> catalog entries can't go away.  Maybe it all happens to work out OK,
>> but I don't think we can count on that.
>
> I didn't think of DROP TABLE in a command trigger running BEFORE ALTER
> TABLE, say. That looks evil. It could be documented as yet another way
> to shoot yourself in the foot, though?

Well, it depends somewhat on how it fails. If it fails by crashing
the server, for example, I don't think that's going to fly. My
suspicion is that it won't do that, but what it might do is fail in
some pretty odd and unpredictable ways, possibly leading to catalog
corruption, which I don't feel too good about either. Think about not
just ALTER vs. DROP but also ALTER vs. ALTER. It's probably easier to
add guards against this kind of thing than it is to prove that it's
not going to do anything too wacky; the obvious idea that comes to
mind is to bump the command counter after returning from the last
trigger (if that doesn't happen already) and then verify that the
tuple is still present and has whatever other properties we've already
checked and are counting on, and if not chuck an error.
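To make the hazard concrete, here's a sketch using the syntax proposed earlier in this thread (which is from the uncommitted patch, so the exact spelling may differ; drop_it() is a hypothetical trigger procedure):

```sql
-- A BEFORE trigger that drops the very table the triggering command is
-- about to operate on.
CREATE COMMAND TRIGGER evil BEFORE ALTER TABLE
    EXECUTE PROCEDURE drop_it();   -- drop_it() runs: DROP TABLE <target>

-- When ALTER TABLE resumes after the trigger returns, the table's
-- catalog tuple may already be gone.  With the guard described above
-- (bump the command counter, then re-verify the tuple), this would
-- raise an error instead of operating on a stale tuple copy.
ALTER TABLE t RENAME TO t2;
```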

I think that for a first version of this feature it probably would be
smart to trim this back to something that doesn't involve surgery on
the guts of every command; then, we can extend incrementally. Nothing
you've proposed seems impossible to me, but most of the really
interesting things are hard, and it would be much easier to handle
patches intended to cater to more complex use cases if the basic
infrastructure were already committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
