From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: James Pang <jamespang886(at)gmail(dot)com>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: max_locks_per_transaction v18
Date: 2025-08-18 06:58:23
Message-ID: CAApHDvqnZhZ3C6qZZMDxN6W9y7x_xDwbLhZoQU3QL1WBgBk4Ew@mail.gmail.com
Lists: pgsql-hackers
On Mon, 18 Aug 2025 at 18:23, James Pang <jamespang886(at)gmail(dot)com> wrote:
> Not tested, and no regression found either. With 10k connections and "max_locks_per_transaction=128", it needs more than 1GB of extra memory, right? My understanding is that max_locks_per_transaction is the maximum number of locked objects in a transaction (not the average number of objects locked at the same time across all connections), but the fast-path lock slots are allocated based on this parameter as soon as a client connection is established, right? So even if that many fast-path lock slots are never needed, the extra memory is still allocated for 10k connections. We may test that in our environment and report back if we find anything.
Can you share how you came to 1GB extra?
By my calculations, I believe it's an extra 5625 kB total for the
entire instance.
select pg_size_pretty((max_locks_per_xact / 16 * 8 +
                       max_locks_per_xact / 16 * 4 * 16) * connections::numeric)
from (values (128, 10000)) v(max_locks_per_xact, connections);
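For what it's worth, the same arithmetic can be sketched in Python. This is just an illustration of the layout the query above encodes (max_locks_per_xact / 16 fast-path groups per backend, each with one 8-byte bitmap plus sixteen 4-byte slots); the function name is mine, not anything in PostgreSQL:

```python
def fastpath_extra_bytes(max_locks_per_xact: int, connections: int) -> int:
    # One fast-path group per 16 locks of max_locks_per_transaction.
    groups = max_locks_per_xact // 16
    # Per backend: an 8-byte bitmap per group, plus 16 four-byte slots per group.
    per_backend = groups * 8 + groups * 16 * 4
    return per_backend * connections

total = fastpath_extra_bytes(128, 10_000)
print(total, total / 1024)  # 5760000 bytes = 5625.0 kB, nowhere near 1 GB
```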
David