Re: pg_upgrade failing for 200+ million Large Objects

From: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>
To: "Tharakan, Robins" <tharar(at)amazon(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-08 10:25:13
Message-ID: cc089cc3-fc43-9904-fdba-d830d8222145@enterprisedb.com
Lists: pgsql-hackers

On 07.03.21 09:43, Tharakan, Robins wrote:
> Attached is a proof-of-concept patch that allows pg_upgrade to complete
> when the instance has millions of objects.
>
> It would be great if someone could take a look and see whether this patch is
> headed in the right direction. There are some pending tasks (such as
> documentation and pg_resetxlog-vs-pg_resetwal-related changes), but for now
> the patch removes a stalemate: if a Postgres instance has a large number of
> Large Objects (more precisely, 146+ million), pg_upgrade fails. This is
> easily reproducible, and besides deleting Large Objects before the upgrade,
> there is no other (apparent) way for pg_upgrade to complete.
>
> The patch (attached):
> - Applies cleanly on REL9_6_STABLE -
> c7a4fc3dd001646d5938687ad59ab84545d5d043
> - 'make check' passes
> - Allows the user to provide a constant via the pg_upgrade command line that
> overrides the 2 billion constant in pg_resetxlog [1], thereby increasing the
> window of Transaction IDs available for pg_upgrade to complete.

Could you explain what your analysis of the problem is and why this
patch (might) fix it?

Right now, all I see here is: pass a big number via a command-line
option and hope it works.
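
For context, the knob the quoted patch appears to build on is the -x
switch that pg_resetxlog (9.6) and pg_resetwal (10 and later) already
expose for setting the next transaction ID recorded in pg_control. A
rough sketch of the manual equivalent, with the XID value and the data
directory path as placeholders rather than recommendations:

    # placeholder value and path only -- not a suggested setting
    pg_resetxlog -x <next-xid> /path/to/new/cluster/data    # 9.6-era binaries
    pg_resetwal  -x <next-xid> /path/to/new/cluster/data    # PostgreSQL 10 and later

Depending on the value chosen, the matching pg_clog / pg_xact segment
may need to be created by hand; presumably the proposed pg_upgrade
option is meant to automate that choice rather than leave it to the
operator.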
