Re: Maximum amount of pg_locks

From: Tim Herren <tim(dot)herren(at)protonmail(dot)ch>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Maximum amount of pg_locks
Date: 2026-01-29 09:50:39
Message-ID: 2O894VNQ7OUeYU_VfQi_wCmycgiJaXLqJqlc_Lo6I9B0oc0s4i-DC8P-1opSAnlUeBufY_e_yavJYy38VuxqADkNBxQdVtxLqBi4qn7Olrk=@protonmail.ch
Lists: pgsql-admin

On Wednesday, January 28th, 2026 at 10:51 PM, Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:

> On Wed, Jan 28, 2026 at 3:57 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
>> Tim Herren <tim(dot)herren(at)protonmail(dot)ch> writes:
>>> I'm trying to wrap my head around how the calculation for the maximum number of locks works in Postgres 16.11.
>>> I have already come to the understanding that the maximum number of locks is not set on a per-transaction basis, but rather influenced by the settings "max_locks_per_transaction" and "max_connections".
>>> I'm saying influenced rather than calculated because on my server a simple multiplication of those two values, set at 512 and 180 respectively, gives 92160.
>>> Yet I regularly observe around 119k locks during my backup using "pg_dump -Fc".
>>
>> Well, the hash table size is indeed set by a calculation of that form,
>> but it's max_locks_per_transaction times the number of potential
>> locker processes --- not only regular sessions (max_connections),
>> but also autovacuum processes and other background workers. And
>> prepared transactions, too. lock.c has
>>
>> #define NLOCKENTS() \
>>     mul_size(max_locks_per_xact, add_size(MaxBackends, max_prepared_xacts))
>>
>> where postinit.c calculates MaxBackends as:
>>
>>     /* Note that this does not include "auxiliary" processes */
>>     MaxBackends = MaxConnections + autovacuum_worker_slots +
>>         max_worker_processes + max_wal_senders + NUM_SPECIAL_WORKER_PROCS;
>>
>> Then on top of that, there's a pretty considerable fudge factor:
>> lock.c scales up its shared-memory size request by 10%, and there
>> is (from memory) about 100K slop space added on top of the total of
>> shared-memory size requests from all modules, and all of that space
>> is potentially available for the lock table to overflow into. (Other
>> shared data structures could claim it too, but I think the lock table
>> is the only one that we don't have a hard upper bound for.)
>>
>> If you really want to know how many locks can be taken in your
>> particular setup, I'd counsel experimenting by deliberately exhausting
>> the lock space. (Maybe not during production hours.) The precise
>> limit will vary across PG versions, and potentially depend on other
>> factors like what extensions you have loaded.
>
> I asked a similar question a few years ago, and you said that on modern systems it's no problem to bump max_locks_per_transaction pretty darned high (like 1 million). "The default, 64, has historically proven sufficient" had me slowly incrementing, but after your comment I added a zero to my existing max_locks_per_transaction value (1536, one and a half KiB) and haven't thought about it since.
> --
>
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
>
> <Redacted> lobster!

Thank you for your explanation and for the information from an actual production server.
I'll do some more reading and then adjust my monitoring.

Thanks again for the valuable input, have a nice day.
Tim
