Re: Speed up transaction completion faster after many relations are accessed in a transaction

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, "Imai, Yoshikazu" <imai(dot)yoshikazu(at)jp(dot)fujitsu(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date: 2019-08-15 22:30:59
Message-ID: 20190815223059.fkjsi3oywa4a2jtk@development
Lists: pgsql-hackers

On Wed, Aug 14, 2019 at 07:25:10PM +1200, David Rowley wrote:
>On Thu, 25 Jul 2019 at 05:49, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> On the whole, I don't especially like this approach, because of the
>> confusion between peak lock count and end-of-xact lock count. That
>> seems way too likely to cause problems.
>
>Thanks for having a look at this. I've not addressed the points you
>mentioned, because of the concern you raise above. The only way I can
>think of so far to resolve it would be to add something to track peak
>lock usage. The best I can think of for that, short of adding something
>to dynahash.c, is to check how many locks are held each time we obtain
>a lock, and if that count is higher than the previous time we checked,
>update the maximum locks held (probably in a global variable). That
>seems pretty horrible to me, and it adds overhead each time we obtain a
>lock, which is a pretty performance-critical path.
>

Would it really be measurable overhead? We only need a single int
counter, and the check doesn't have to happen on every lock acquisition -
it's enough to recheck on lock release. But maybe I'm underestimating
how expensive it is ...
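
Something like this is what I have in mind - just a minimal sketch, not
an actual patch. The on_lock_acquire/on_lock_release hooks and the two
globals are hypothetical stand-ins for the relevant spots in lock.c:

    static int LocksHeld = 0;      /* locks currently held by this backend */
    static int PeakLocksHeld = 0;  /* high-water mark for the transaction */

    static void
    on_lock_acquire(void)
    {
        /* hot path: just count, no comparison needed here */
        LocksHeld++;
    }

    static void
    on_lock_release(void)
    {
        /*
         * The count only ever decreases at a release, so the peak is
         * always attained just before one - checking here is sufficient
         * and keeps the acquire path free of comparisons.
         */
        if (LocksHeld > PeakLocksHeld)
            PeakLocksHeld = LocksHeld;
        LocksHeld--;
    }

At end of transaction, PeakLocksHeld (reset along with the lock table)
could then drive the decision whether to shrink the local lock table,
without conflating the peak count with the end-of-xact count.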

Talking about dynahash - doesn't it already track this information?
Maybe not directly, but surely it has to track the number of entries in
the hash table in order to compute the fill factor. Can't we piggy-back
on that and track the highest fill factor for a particular period of
time?
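
For example (again, only a sketch - hash_get_num_entries() is the
existing dynahash accessor, but the sampling function and the global
are made up):

    #include "utils/hsearch.h"

    static long PeakLockTableEntries = 0;   /* hypothetical high-water mark */

    /*
     * Hypothetical sampling point - dynahash already maintains the entry
     * count internally, so reading it is just a field access.
     */
    static void
    sample_lock_table_size(HTAB *lock_table)
    {
        long    nentries = hash_get_num_entries(lock_table);

        if (nentries > PeakLockTableEntries)
            PeakLockTableEntries = nentries;
    }

Calling that at lock release time would give us the high-water mark
essentially for free.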

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
