Re: Speed up transaction completion faster after many relations are accessed in a transaction

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, "Imai, Yoshikazu" <imai(dot)yoshikazu(at)jp(dot)fujitsu(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Simon Riggs <simon(at)2ndquadrant(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Speed up transaction completion faster after many relations are accessed in a transaction
Date: 2019-08-14 07:25:10
Message-ID: CAKJS1f8wHjmu_tALMZpqOktKFsAsKneS6aOvkVRV2ezcvYBCSw@mail.gmail.com
Lists: pgsql-hackers

On Thu, 25 Jul 2019 at 05:49, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> On the whole, I don't especially like this approach, because of the
> confusion between peak lock count and end-of-xact lock count. That
> seems way too likely to cause problems.

Thanks for having a look at this. I've not addressed the other points
you raised because of the issue you mention above. The only way I can
think of so far to resolve it would be to add something to track peak
lock usage. The best I can come up with, short of adding something to
dynahash.c, is to check how many locks are held each time we obtain
one and, if that count is higher than the previous maximum, update the
maximum locks held (probably in a global variable). That seems pretty
horrible to me, and it adds overhead each time we obtain a lock, which
is a pretty performance-critical path.
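To illustrate, here's a rough sketch of the sort of thing I mean. It's
only an illustration; PeakLocalLockCount and TrackPeakLocalLocks are
made-up names, and the idea would be to call something like this from
each place we add an entry to LockMethodLocalHash:

/*
 * Rough sketch only.  PeakLocalLockCount and TrackPeakLocalLocks are
 * made-up names, not anything that exists in the tree today.
 */
static long PeakLocalLockCount = 0;

static void
TrackPeakLocalLocks(void)
{
	long	nlocks = hash_get_num_entries(LockMethodLocalHash);

	if (nlocks > PeakLocalLockCount)
		PeakLocalLockCount = nlocks;
}

At end of transaction we could then look at PeakLocalLockCount to
decide whether the hash table is worth rebuilding, and reset it, but
even that small amount of extra work per lock acquisition seems like
more than I'd want to add.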

I've not tested what Andres mentioned about simplehash instead of
dynahash yet. I did a quick scan of simplehash and it looked like
SH_START_ITERATE would suffer the same problem as dynahash's
hash_seq_search(), albeit perhaps with more efficient memory lookups,
i.e. it still has to skip over empty buckets, which might be costly in
a bloated table.
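For what it's worth, here's a toy illustration of the shape of the
problem (not simplehash code): the iteration loop has to visit every
bucket, so the cost tracks the bucket count rather than the number of
live entries.

#include <stdbool.h>
#include <stdint.h>

/*
 * Toy illustration only -- not simplehash code.  An open-addressing
 * iterator in the style of SH_START_ITERATE/SH_ITERATE walks every
 * bucket and skips the empty ones, so the cost is proportional to the
 * bucket count, not to the number of live entries.
 */
typedef struct
{
	uint32_t	nbuckets;	/* total bucket count */
	bool	   *in_use;		/* which buckets hold live entries */
} toy_hash_table;

static uint32_t
toy_count_live(const toy_hash_table *table)
{
	uint32_t	nlive = 0;

	for (uint32_t i = 0; i < table->nbuckets; i++)
	{
		/* empty buckets are skipped, but each one still costs a visit */
		if (table->in_use[i])
			nlive++;
	}
	return nlive;
}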

For now, I'm out of ideas. If anyone else feels like suggesting
something or picking this up, feel free.

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
