Re: pg_stat_statements locking

From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_statements locking
Date: 2022-09-13 06:12:55
Message-ID: 20220913061255.hyymv3spaisbqj6h@jrouhaud
Lists: pgsql-hackers

On Tue, Sep 13, 2022 at 10:38:13AM +0500, Andrey Borodin wrote:
>
> And the other way is refactoring towards partitioned hashtable, namely
> dshash. But I don't see how partitioned locking can save us from a locking
> disaster. Problem is caused by reading all the pgss view colliding with
> reset() or GC.

If you store the query texts in DSM, you won't have a query text file to
maintain and the GC problem will disappear.
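Roughly something like this (untested sketch; only dsa_allocate()/dsa_get_address()
are the existing DSA API, the entry layout and helper names are made up for
illustration):

/*
 * Hypothetical sketch: keep query texts in a DSA area instead of an
 * external file.
 */
#include "postgres.h"
#include "utils/dsa.h"

typedef struct pgssTextEntry
{
	dsa_pointer text_ptr;		/* DSA offset of the query text */
	Size		text_len;		/* length including terminating NUL */
} pgssTextEntry;

/* Copy a query text into the DSA area and remember where it lives. */
static void
pgss_store_query_text(dsa_area *area, pgssTextEntry *entry, const char *query)
{
	Size		len = strlen(query) + 1;

	entry->text_ptr = dsa_allocate(area, len);
	memcpy(dsa_get_address(area, entry->text_ptr), query, len);
	entry->text_len = len;
}

/*
 * Fetch the text back; any backend attached to the area can do this, so
 * there is no query text file and nothing to garbage collect (dsa_free()
 * when the entry is evicted is all that's needed).
 */
static const char *
pgss_get_query_text(dsa_area *area, const pgssTextEntry *entry)
{
	return (const char *) dsa_get_address(area, entry->text_ptr);
}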

> Both these operations deal with each partition - they will
> conflict anyway, with the same result. Time-consuming read of each partition
> will prevent exclusive lock by reset(), and queued exclusive lock will
> prevent any reads from hashtable.

Conflicts would still be possible, but they would be less likely and
shorter-lived, since the whole dshash is never locked globally, only one
partition at a time (except when the dshash is resized, but those locks aren't
held for long and resizing isn't frequent).

But the biggest improvement should come from reusing the pgstats
infrastructure. I've only had a glance at it so I don't know much about it,
but it has a per-backend hashtable to cache some information and avoid too
many accesses to the shared hash table, and a mechanism to accumulate entries
and do batch updates.
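
For illustration, an untested sketch of that accumulate-then-flush idea (not
the actual pgstats API; HTAB/hash_search() and the dshash_* calls are the
regular APIs, everything else is made up):

#include "postgres.h"
#include "lib/dshash.h"
#include "utils/hsearch.h"

typedef struct PendingEntry
{
	uint64		queryid;		/* hash key */
	int64		pending_calls;
} PendingEntry;

typedef struct SharedEntry
{
	uint64		queryid;		/* dshash key, must be first */
	int64		calls;
} SharedEntry;

static HTAB *pending_stats = NULL;

/* Fast path: only backend-local memory is touched, no shared locks. */
static void
pgss_count_locally(uint64 queryid)
{
	bool		found;
	PendingEntry *e;

	if (pending_stats == NULL)
	{
		HASHCTL		ctl;

		ctl.keysize = sizeof(uint64);
		ctl.entrysize = sizeof(PendingEntry);
		pending_stats = hash_create("pending pgss stats", 128, &ctl,
									HASH_ELEM | HASH_BLOBS);
	}

	e = hash_search(pending_stats, &queryid, HASH_ENTER, &found);
	if (!found)
		e->pending_calls = 0;
	e->pending_calls++;
}

/*
 * Called occasionally (e.g. at transaction end): apply all pending counters
 * to the shared table, holding each partition lock only briefly.
 */
static void
pgss_flush_pending(dshash_table *shared)
{
	HASH_SEQ_STATUS scan;
	PendingEntry *e;

	if (pending_stats == NULL)
		return;

	hash_seq_init(&scan, pending_stats);
	while ((e = hash_seq_search(&scan)) != NULL)
	{
		bool		found;
		SharedEntry *s;

		s = dshash_find_or_insert(shared, &e->queryid, &found);
		if (!found)
			s->calls = 0;
		s->calls += e->pending_calls;
		dshash_release_lock(shared, s);

		e->pending_calls = 0;
	}
}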
