Re: bug in fast-path locking

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Boszormenyi Zoltan <zb(at)cybertec(dot)at>, Cousin Marc <cousinmarc(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Hans-Juergen Schoenig <hs(at)cybertec(dot)at>, Ants Aasma <ants(at)cybertec(dot)at>
Subject: Re: bug in fast-path locking
Date: 2012-04-09 17:49:46
Message-ID: 10311.1333993786@sss.pgh.pa.us
Lists: pgsql-hackers

Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> I looked at this more. The above analysis is basically correct, but
> the problem goes a bit beyond an error in LockWaitCancel(). We could
> also crap out with an error before getting as far as LockWaitCancel()
> and have the same problem. I think that a correct statement of the
> problem is this: from the time we bump the strong lock count, up until
> the time we're done acquiring the lock (or give up on acquiring it),
> we need to have an error-cleanup hook in place that will unbump the
> strong lock count if we error out. Once we're done updating the
> shared and local lock tables, the special handling ceases to be
> needed, because any subsequent lock release will go through
> LockRelease() or LockReleaseAll(), which will do the appropriate
> cleanup.
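
To restate that invariant as a minimal sketch (all names here are invented
for illustration; this is not the actual lock.c code): between the bump and
the completion of the lock-table update, an error-cleanup hook must be able
to unbump the count.

/*
 * Sketch of the cleanup invariant described above.
 * Hypothetical names throughout; not the actual lock.c code.
 */
#include <stdio.h>

#define NUM_PARTITIONS 16

/* hypothetical shared counters of strong locks (taken or pending) */
static unsigned strong_lock_counts[NUM_PARTITIONS];

/* partition bumped by an acquisition still in progress; -1 if none */
static int strong_lock_in_progress = -1;

static void
begin_strong_lock_acquire(int partition)
{
    strong_lock_counts[partition]++;     /* bump */
    strong_lock_in_progress = partition; /* remember, so cleanup can unbump */
}

static void
finish_strong_lock_acquire(void)
{
    /* shared and local lock tables are updated; from here on, any release
     * goes through the normal LockRelease()/LockReleaseAll() paths */
    strong_lock_in_progress = -1;
}

static void
abort_strong_lock_acquire(void)
{
    /* error-cleanup hook: unbump if we errored out inside the window */
    if (strong_lock_in_progress >= 0)
    {
        strong_lock_counts[strong_lock_in_progress]--;
        strong_lock_in_progress = -1;
    }
}

int
main(void)
{
    /* success path: the bump persists until a normal lock release */
    begin_strong_lock_acquire(3);
    finish_strong_lock_acquire();

    /* error path: acquisition fails inside the window, cleanup unbumps */
    begin_strong_lock_acquire(3);
    abort_strong_lock_acquire();

    printf("partition 3 count: %u\n", strong_lock_counts[3]); /* prints 1 */
    return 0;
}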

Haven't looked at the code, but maybe it'd be better to not bump the
strong lock count in the first place until the final step of updating
the lock tables?
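
A sketch of that alternative ordering, again with invented helper names
(whether the bump can really be postponed depends on why it currently
precedes the fast-path checks):

/* Hypothetical stubs; the real steps live in lock.c. */
static void bump_strong_lock_count(void)     { /* shared-memory increment */ }
static void wait_for_fast_path_holders(void) { /* may error out */ }
static void update_lock_tables(void)         { /* shared and local entries */ }

/* current order: the bump precedes steps that can error out, so an
 * error-cleanup hook must stay live across the whole window */
static void
acquire_strong_lock_bump_first(void)
{
    bump_strong_lock_count();
    wait_for_fast_path_holders();
    update_lock_tables();
}

/* suggested order: bump only as the final step, leaving no window in
 * which an error can strand a too-high count */
static void
acquire_strong_lock_bump_last(void)
{
    wait_for_fast_path_holders();
    update_lock_tables();
    bump_strong_lock_count();
}

int
main(void)
{
    acquire_strong_lock_bump_first();
    acquire_strong_lock_bump_last();
    return 0;
}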

regards, tom lane
