Re: pg_upgrade failing for 200+ million Large Objects

From: Jacob Champion <jchampion(at)timescale(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Jan Wieck <jan(at)wi3ck(dot)info>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <bruce(at)momjian(dot)us>, Zhihong Yu <zyu(at)yugabyte(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2022-09-08 23:29:10
Message-ID: CAAWbhmgUb8p7ff_ZX5jCvqM=ipPxbbDJTXMNVzH-Ho_CXVkRHA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
> IIUC the main benefit of this approach is that it isn't dependent on
> binary-upgrade mode, which seems to be a goal based on the discussion
> upthread [0].

To clarify, I agree that pg_dump should contain the core fix. What I'm
questioning is the addition of --dump-options to make use of that fix
from pg_upgrade, since it also lets the user do "exciting" new things
like --exclude-schema and --include-foreign-data and so on. I don't
think we should let them do that without a good reason.
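
For illustration only (a hypothetical sketch: --dump-options is the proposed
pass-through flag under discussion, not an existing pg_upgrade option, and the
schema and server names are made up), the pass-through would permit something
like:

    pg_upgrade -b <old-bindir> -B <new-bindir> \
               -d <old-datadir> -D <new-datadir> \
               --dump-options='--exclude-schema=some_schema --include-foreign-data=some_server'

which, if accepted, would leave the new cluster something other than a
faithful copy of the old one.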

Thanks,
--Jacob
