| From: | Ron Johnson <ronljohnsonjr(at)gmail(dot)com> |
|---|---|
| To: | "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Maximum amount of pg_locks |
| Date: | 2026-01-28 21:50:32 |
| Message-ID: | CANzqJaD2ahSpQK6TGdOKyQmh_zBtw6pp_A5Kvik1805-Q4ECTA@mail.gmail.com |
| Lists: | pgsql-admin |
On Wed, Jan 28, 2026 at 3:57 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Tim Herren <tim(dot)herren(at)protonmail(dot)ch> writes:
> > I'm trying to wrap my head around how the calculation for the
> > maximum number of locks works in Postgres 16.11.
> > I've already come to understand that the maximum number of locks is
> > not set on a per-transaction basis, but rather influenced by the settings
> > "max_locks_per_transaction" and "max_connections".
> > I say influenced rather than calculated because on my server a
> > simple multiplication of those two values, set at 512 and 180 respectively,
> > gives 92160.
> > Yet I regularly observe around 119k locks during my backup using
> > "pg_dump -Fc".
>
> Well, the hash table size is indeed set by a calculation of that form,
> but it's max_locks_per_transaction times the number of potential
> locker processes --- not only regular sessions (max_connections),
> but also autovacuum processes and other background workers. And
> prepared transactions, too. lock.c has
>
> #define NLOCKENTS() \
>     mul_size(max_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
>
> where postinit.c calculates MaxBackends as:
>
> /* Note that this does not include "auxiliary" processes */
> MaxBackends = MaxConnections + autovacuum_worker_slots +
>     max_worker_processes + max_wal_senders + NUM_SPECIAL_WORKER_PROCS;
>
> Then on top of that, there's a pretty considerable fudge factor:
> lock.c scales up its shared-memory size request by 10%, and there
> is (from memory) about 100K slop space added on top of the total of
> shared-memory size requests from all modules, and all of that space
> is potentially available for the lock table to overflow into. (Other
> shared data structures could claim it too, but I think the lock table
> is the only one that we don't have a hard upper bound for.)
>
> If you really want to know how many locks can be taken in your
> particular setup, I'd counsel experimenting by deliberately exhausting
> the lock space. (Maybe not during production hours.) The precise
> limit will vary across PG versions, and potentially depend on other
> factors like what extensions you have loaded.
>
I asked a similar question a few years ago, and you said that on modern
systems it's no problem to bump max_locks_per_transaction pretty darned
high (like 1 million). "The default, 64, has historically proven
sufficient" had me incrementing slowly, but after your comment *I added a
zero* to my existing max_locks_per_transaction value of 1536 *and haven't
thought about it since*.
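For anyone following along, Tom's formula can be sketched numerically with the original poster's settings. The worker-related GUC values below are assumptions (defaults vary by PG version, and NUM_SPECIAL_WORKER_PROCS is an internal constant I'm guessing at), so treat this as illustrative arithmetic, not an authoritative sizing:

```python
# Sketch of the NLOCKENTS() sizing from lock.c, using the original
# poster's settings (max_locks_per_transaction=512, max_connections=180)
# plus ASSUMED values for the other GUCs -- check your own server's
# settings; these vary across PostgreSQL versions.

max_locks_per_xact = 512       # max_locks_per_transaction
max_connections = 180
autovacuum_worker_slots = 16   # assumed; older versions use autovacuum_max_workers
max_worker_processes = 8       # assumed default
max_wal_senders = 10           # assumed default
num_special_worker_procs = 2   # assumed small internal constant
max_prepared_xacts = 0         # assumed default (max_prepared_transactions)

# MaxBackends per the postinit.c snippet quoted above
max_backends = (max_connections + autovacuum_worker_slots
                + max_worker_processes + max_wal_senders
                + num_special_worker_procs)

# NLOCKENTS() = max_locks_per_xact * (MaxBackends + max_prepared_xacts)
nlockents = max_locks_per_xact * (max_backends + max_prepared_xacts)
print(nlockents)             # 110592 -- hash table target size
print(int(nlockents * 1.1))  # 121651 -- with lock.c's 10% slop on top
```

Under these assumptions the result lands in the same ballpark as the ~119k observed locks, rather than the naive 512 * 180 = 92160, which is the point of Tom's explanation.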
--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!