Re: shared memory size during upgrade pgsql with partitions (max_locks_per_transaction)

From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Piotr Włodarczyk <piotr(dot)wlodarczyk(at)gnb(dot)pl>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: shared memory size during upgrade pgsql with partitions (max_locks_per_transaction)
Date: 2019-12-18 04:01:19
Message-ID: 20191218040119.GE2082@telsasoft.com

On Tue, Dec 17, 2019 at 08:03:41PM +0000, Piotr Włodarczyk wrote:
> Currently we're working on PostgreSQL 11.5 and we're trying to upgrade to 12.1.
>
> During that we hit a problem:
>
> command: "/usr/pgsql-12/bin/pg_dump" --host /cluster/postgresql --port 50432
> --username postgres --schema-only --quote-all-identifiers --binary-upgrade
> --format=custom --file="pg_upgrade_dump_281535902.custom" 'dbname=sprint'
> >> "pg_upgrade_dump_281535902.log" 2>&1
> pg_dump: error: query failed: ERROR: out of shared memory
> HINT: You might need to increase max_locks_per_transaction.
> pg_dump: error: query was: LOCK TABLE
> "some_schemaa"."table_part_80000000_2018q3" IN ACCESS SHARE MODE
>
> On the current instance we have about one thousand partitions, partitioned at
> two levels: first by id_product, and second by quarter of the year, as you can
> see in the log above.
>
> How should we calculate shared memory (and possibly max_locks_per_transaction)
> so that we fit within the limits during the upgrade?

Great question. Clearly, if you can run that (or a similar) pg_dump command,
then you can pg_upgrade. I think you could also test with pg_upgrade --check.

The query looks like:
FROM pg_class c...
WHERE c.relkind in ('%c', '%c', '%c', '%c', '%c', '%c', '%c')

..and then does:

if (tblinfo[i].dobj.dump &&
    (tblinfo[i].relkind == RELKIND_RELATION ||
     tblinfo->relkind == RELKIND_PARTITIONED_TABLE) &&
    (tblinfo[i].dobj.dump & DUMP_COMPONENTS_REQUIRING_LOCK))
{
    resetPQExpBuffer(query);
    appendPQExpBuffer(query,
                      "LOCK TABLE %s IN ACCESS SHARE MODE",
                      fmtQualifiedDumpable(&tblinfo[i]));
    ExecuteSqlStatement(fout, query->data);
}

..then filters by -N/-n/-t/-T (which doesn't apply to pg_upgrade):
selectDumpableTable(&tblinfo[i], fout);

So it looks like COUNT(1) FROM pg_class WHERE relkind IN ('r','p') should do it.
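
For example, to sanity-check how many relations pg_dump will try to lock,
something like this run on the old cluster should do (just a sketch):

SELECT count(*) AS lockable_tables
FROM pg_class
WHERE relkind IN ('r', 'p');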

But during pg_upgrade, since nothing else is running, you actually have
max_connections * max_locks_per_transaction total lock slots available.
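
As a rough illustration of that capacity (a sketch; strictly speaking the shared
lock table is also sized for max_prepared_transactions):

SELECT current_setting('max_connections')::int
     * current_setting('max_locks_per_transaction')::int AS approx_lock_slots;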

Said differently, I think you could set max_locks_per_transaction to:
SELECT (SELECT COUNT(1) FROM pg_class WHERE relkind IN ('r','p'))/current_setting('max_connections')::int;

..probably with a fudge factor of +10 to allow for system processes (and for
integer truncation).
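
Putting that together, a sketch of what you might run before the upgrade (the
+10 and the value in the ALTER SYSTEM are illustrative only, and
max_locks_per_transaction only takes effect after a restart):

SELECT (SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'p'))
       / current_setting('max_connections')::int + 10 AS suggested_value;

-- then apply whatever that returns, e.g.:
ALTER SYSTEM SET max_locks_per_transaction = 128;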

Someone might say that pg_upgrade or pg_dump could check for that specifically..

Justin
