Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Wed, Jul 20, 2011 at 2:58 PM, Kevin Grittner
> <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> So basically, the goal of this patch is to ensure that a BEFORE
>> DELETE trigger doesn't silently fail to delete a row because that
>> row was updated during the BEFORE DELETE trigger firing (in our
>> case by secondary trigger firing).
> I've run across this problem before while writing application code
> and have found it surprising, unfortunate, and difficult to work
> around. It's not so bad when it only bites you once, but as things
> get more complicated and you have more and more triggers floating
> around, it gets pretty darn horrible. One of the things I've done
> to mitigate the impact of this is to write as many triggers as
> possible as AFTER triggers
Yeah, this is not an issue in AFTER triggers, so moving any updating
to those is a solution. In most cases that's where you want
triggered modifications to take place anyway. The cascading delete
situation is the most obvious exception, although there certainly
could be others.
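As a minimal sketch of the failure mode under discussion (all table,
trigger, and function names here are invented for illustration, not
taken from the patch):

```sql
CREATE TABLE parent (id int PRIMARY KEY, touched timestamptz);

CREATE FUNCTION parent_before_delete() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- Updating the row we are about to delete -- whether directly
    -- here or via secondary trigger firing -- means the outer DELETE
    -- finds the tuple already modified by its own transaction and
    -- silently skips it.
    UPDATE parent SET touched = now() WHERE id = OLD.id;
    RETURN OLD;
END;
$$;

CREATE TRIGGER parent_bd BEFORE DELETE ON parent
    FOR EACH ROW EXECUTE PROCEDURE parent_before_delete();

INSERT INTO parent VALUES (1, NULL);
DELETE FROM parent WHERE id = 1;
-- Reports DELETE 0; the row survives, with touched set by the
-- trigger's UPDATE.
```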
> but that's sometimes difficult to accomplish.
> Your scenario is a BEFORE DELETE trigger that does an UPDATE on
> the same row, but I think this problem also occurs if you have a
> BEFORE UPDATE trigger that does an UPDATE on the same row. I
> believe the second update gets silently ignored.
My testing shows that the primary update gets ignored, while all the
triggered effects of that update are persisted. Yuck. :-( It
certainly seems possible to turn that around, but that hardly seems
better. In asking application programmers here what they would
*expect* to happen, they all seem to find it surprising that BEFORE
trigger functions *return a record* rather than a boolean saying
whether to proceed with the operation. They feel it would be less
confusing if a value set into NEW took effect whenever the operation
does, with the boolean controlling whether it does. They also expect
that if an update happens from the same transaction while a BEFORE
trigger is running, the NEW record will reflect the change.
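The BEFORE UPDATE variant can be sketched similarly (again with
invented names; the column-specific trigger is just a convenient way
to keep the inner UPDATE from re-firing the trigger):

```sql
CREATE TABLE t (id int PRIMARY KEY, val int, audit int DEFAULT 0);

CREATE FUNCTION t_before_update() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- A same-row UPDATE from inside the trigger; it only touches
    -- audit, so the "UPDATE OF val" trigger below does not recurse.
    UPDATE t SET audit = audit + 1 WHERE id = OLD.id;
    RETURN NEW;
END;
$$;

CREATE TRIGGER t_bu BEFORE UPDATE OF val ON t
    FOR EACH ROW EXECUTE PROCEDURE t_before_update();

INSERT INTO t VALUES (1, 10);
UPDATE t SET val = 20 WHERE id = 1;
-- Per the behavior described above: the row ends up with val still
-- 10 (the primary update is silently ignored) but audit = 1 (the
-- triggered effect persists).
```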
I recognize how hard it would be to create that expected behavior,
and how unlikely it is that the community would accept such a change
at this point. But current behavior is to silently do something
really dumb, so I think some change should be considered -- even if
that change is to throw an error where we now allow nonsense.
INSERT is not a problem -- if a BEFORE INSERT trigger inserts a row
with a conflicting primary key (or other unique index key), the
operation will be rolled back. That's fine.
I think DELETE can be cleanly fixed with a patch similar to what I
posted earlier in the thread. I found one more value that looks
like it should be set, and it could use some comments, but I believe
that we can get DELETE behavior which is every bit as sensible as
INSERT behavior with a very small change.
The worms do come crawling out of the can on BEFORE UPDATE triggers,
though. When faced with an UPDATE which hasn't yet been applied,
and other UPDATEs triggering from within the BEFORE UPDATE trigger
which touch the same row, it doesn't seem like you can honor both
the original UPDATE which was requested *and* the triggered UPDATEs.
Of course, if you discard the original UPDATE, you can't very well
do anything with the record returned from the BEFORE UPDATE trigger
for that update. Since it seems equally evil to allow the update
while ignoring some of the work caused by its trigger functions as
to show the work of the triggered updates while suppressing the
original update, I think the right thing is to throw an error if the
old row for a BEFORE UPDATE is updated by the same transaction and
the trigger function ultimately returns a non-NULL value.