Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock

From: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, tender wang <tndrwang(at)gmail(dot)com>, pgsql-hackers mailing list <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock
Date: 2024-01-26 18:21:18
Message-ID: 7339C2BA-9893-46D3-9236-04D455FC7504@yandex-team.ru
Lists: pgsql-hackers

> On 26 Jan 2024, at 22:38, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> wrote:
>
> This is OK because in the
> default compilation each file only has 32 segments, so that requires
> only 32 lwlocks held at once while the file is being deleted.

Do we account anywhere for the possibility that different subsystems might accumulate locks up to MAX_SIMUL_LWLOCKS together?
E.g. GiST during a split can accumulate 75 locks, and somehow this same backend could be deactivating commit_ts at the same moment and add 32 locks more :)
I understand that this sounds far-fetched; these subsystems do not interfere. But it is far-fetched only until something like that actually happens.
If possible, I'd prefer one lock at a time, and maybe sometimes two or three with some guarantee that this is safe.
So, from my POV, the first solution that you proposed seems much better to me.
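
To illustrate the concern, here is a minimal standalone sketch of how I read the per-backend LWLock bookkeeping in lwlock.c (MAX_SIMUL_LWLOCKS and the "too many LWLocks taken" error message are taken from my reading of the sources; everything else is simplified and hypothetical). The point is that every acquired lock, whichever subsystem takes it, lands in the same per-backend array, so the counts add up against a single limit:

/*
 * Simplified, standalone sketch of per-backend LWLock bookkeeping.
 * This is NOT the real lwlock.c code -- only an illustration that the
 * limit is one per-backend counter shared by every subsystem
 * (GiST split locks, SLRU truncation locks, ...), so they accumulate.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_SIMUL_LWLOCKS 200       /* per-backend cap, as in lwlock.c */

typedef struct LWLock LWLock;       /* opaque stand-in for the real struct */

static int     num_held_lwlocks = 0;
static LWLock *held_lwlocks[MAX_SIMUL_LWLOCKS];

/* Stand-in for the check LWLockAcquire() performs before taking a lock. */
static void
remember_held_lock(LWLock *lock)
{
    if (num_held_lwlocks >= MAX_SIMUL_LWLOCKS)
    {
        /* the real code raises elog(ERROR, "too many LWLocks taken") */
        fprintf(stderr, "too many LWLocks taken\n");
        exit(1);
    }
    held_lwlocks[num_held_lwlocks++] = lock;
}

int
main(void)
{
    /* Hypothetical worst case from above: 75 locks from a GiST split
     * plus 32 SLRU segment locks all count against the same array. */
    for (int i = 0; i < 75 + 32; i++)
        remember_held_lock(NULL);

    printf("held %d of %d allowed LWLocks\n",
           num_held_lwlocks, MAX_SIMUL_LWLOCKS);
    return 0;
}

Of course today these code paths never run at the same time in one backend, so the sum stays well below the cap; the sketch only shows that nothing in the accounting itself keeps them apart.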

Thanks for working on this!

Best regards, Andrey Borodin.
