Re: pg_upgrade failing for 200+ million Large Objects

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Jan Wieck <jan(at)wi3ck(dot)info>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-20 15:17:41
Message-ID: b9aca8be-c579-0206-b2a6-8579b8027090@dunslane.net
Lists: pgsql-hackers


On 3/20/21 12:39 AM, Jan Wieck wrote:
> On 3/8/21 11:58 AM, Tom Lane wrote:
>> The answer up to now has been "raise max_locks_per_transaction enough
>> so you don't see the failure".  Having now consumed a little more
>> caffeine, I remember that that works in pg_upgrade scenarios too,
>> since the user can fiddle with the target cluster's postgresql.conf
>> before starting pg_upgrade.
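
Concretely, that workaround amounts to something like the following before
starting the upgrade (the paths and the value are placeholders, not a
recommendation; the right number depends on how many large objects the
cluster holds):

    # append to the *target* cluster's postgresql.conf before the upgrade;
    # placeholder value, it has to cover the large objects being restored
    max_locks_per_transaction = 4096

    # then run pg_upgrade as usual, e.g.
    pg_upgrade -b /old/bin -B /new/bin -d /old/data -D /new/data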
>>
>> So it seems like the path of least resistance is
>>
>> (a) make pg_upgrade use --single-transaction when calling pg_restore
>>
>> (b) document (better) how to get around too-many-locks failures.
>
> That would first require fixing how pg_upgrade creates the
> databases. It uses "pg_restore --create", which is mutually exclusive
> with --single-transaction because we cannot create a database inside
> of a transaction. Along the way pg_upgrade also mangles
> pg_database.datdba (all databases are owned by postgres after an
> upgrade); I will submit a separate patch for that, as I consider it a
> bug by itself.
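
For anyone following along, the underlying restriction is server-side:
CREATE DATABASE cannot run inside a transaction block, which is why the
two options can't be combined. A minimal demonstration in psql, with a
throwaway database name:

    BEGIN;
    CREATE DATABASE lo_test;  -- any name; "lo_test" is just a placeholder
    ERROR:  CREATE DATABASE cannot run inside a transaction block
    ROLLBACK;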
>
> All that aside, the entire approach doesn't scale.
>
> In a hacked-up pg_upgrade that does "createdb" first before calling
> pg_restore with --single-transaction, I can upgrade 1M large objects with
>     max_locks_per_transaction = 5300
>     max_connections = 100
> which contradicts the docs. I need to find out where that math went off
> the rails, because that config should only have room for 530,000 locks,
> not 1M. The same test fails with max_locks_per_transaction = 5200.
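
For reference, the documented sizing of the shared lock table is
max_locks_per_transaction * (max_connections + max_prepared_transactions)
entries, so with the settings above and the default
max_prepared_transactions = 0:

    5300 * (100 + 0) = 530,000 lock slots

That matches the 530,000 expectation; why the 1M-object run still
succeeds is exactly the open question.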
>
> But this would mean that one has to set something like
> max_locks_per_transaction = 530,000 at 100 max_connections in
> postgresql.conf in order to actually run a successful upgrade of 100M
> large objects. This config requires 26GB of memory just for locks. Add
> to that the memory pg_restore needs to load the entire TOC before even
> restoring a single object.
>
> Not going to work. But tests are still ongoing ...
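
To put rough numbers on the scale described above (simple arithmetic from
the figures in the quoted mail, nothing measured here):

    530,000 * (100 + 0) = 53,000,000 lock-table slots
    26 GB / 53,000,000  = roughly 500 bytes per slot

i.e. about half a kilobyte of shared memory per potential lock, before
counting the memory pg_restore needs for the TOC itself.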

I thought Tom's suggestion upthread:

> Would it be sane to have the backend not bother to
> take any locks in binary-upgrade mode?

was interesting. Could we do that on the restore side? After all, what
are we locking against in binary upgrade mode?

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
