Re: Remembering bug #6123

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Remembering bug #6123
Date: 2012-01-13 21:29:11
Message-ID: 18937.1326490151@sss.pgh.pa.us
Lists: pgsql-hackers

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I'm not sure what to do about this. If we throw an error there,
>> there will be no way that the trigger can override the error
>> because it will never get to run. Possibly we could plow ahead
>> with the expectation of throwing an error later if the trigger
>> doesn't cancel the update/delete, but is it safe to do so if we
>> don't hold lock on the tuple? In any case that idea doesn't help
>> with the remaining caller, ExecLockRows.

> I'm still trying to sort through what could be done at the source
> code level, but from a user level I would much rather see an error
> than such surprising and unpredictable behavior.

I don't object to throwing an error by default. What I'm wondering is
what would have to be done to make such updates work safely. AFAICT,
throwing an error in GetTupleForTrigger would preclude any chance of
coding a trigger to make this work, which IMO greatly weakens the
argument that this whole approach is acceptable.

In this particular example, I think it would work just as well to do the
reference-count updates in AFTER triggers, and maybe the short answer
is to tell people they have to do it like that instead of in BEFORE
triggers. However, I wonder what use-case led you to file bug #6123 to
begin with.
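[For the archives: a sketch of the AFTER-trigger variant suggested above, using a hypothetical parent/child schema with a refcount column -- the table and column names are illustrative only, not taken from the bug report. The point is that the count maintenance runs after the row operation has been locked in, so it never collides with GetTupleForTrigger's handling of a concurrently updated tuple the way a BEFORE trigger can.]

```sql
-- Hypothetical schema: parent(id, refcount), child(id, parent_id).
-- Maintain parent.refcount from an AFTER row trigger on child,
-- rather than a BEFORE trigger.
CREATE FUNCTION maintain_refcount() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        UPDATE parent SET refcount = refcount + 1
            WHERE id = NEW.parent_id;
    END IF;
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE parent SET refcount = refcount - 1
            WHERE id = OLD.parent_id;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER child_refcount
    AFTER INSERT OR UPDATE OF parent_id OR DELETE ON child
    FOR EACH ROW EXECUTE PROCEDURE maintain_refcount();
```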

regards, tom lane
