Re: pg_upgrade failing for 200+ million Large Objects

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Jacob Champion <jchampion(at)timescale(dot)com>
Cc: Jan Wieck <jan(at)wi3ck(dot)info>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <bruce(at)momjian(dot)us>, Zhihong Yu <zyu(at)yugabyte(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2022-09-08 23:34:07
Message-ID: 20220908233407.GA2244644@nathanxps13
Lists: pgsql-hackers

On Thu, Sep 08, 2022 at 04:29:10PM -0700, Jacob Champion wrote:
> On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>> IIUC the main benefit of this approach is that it isn't dependent on
>> binary-upgrade mode, which seems to be a goal based on the discussion
>> upthread [0].
>
> To clarify, I agree that pg_dump should contain the core fix. What I'm
> questioning is the addition of --dump-options to make use of that fix
> from pg_upgrade, since it also lets the user do "exciting" new things
> like --exclude-schema and --include-foreign-data and so on. I don't
> think we should let them do that without a good reason.

Ah, yes, I think that is a fair point.
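
To illustrate the concern above: with a pass-through flag like the
proposed --dump-options, an invocation along the lines of the following
hypothetical one would be accepted, and pg_upgrade would presumably hand
the filter straight to pg_dump, leaving that schema out of the upgraded
cluster. (This is a sketch of the proposal under discussion, not released
behavior; the paths and the "audit" schema name are made up, though
--exclude-schema is a real pg_dump option.)

    # Hypothetical: --dump-options is the proposed pass-through flag,
    # not an existing pg_upgrade option. Paths/schema are illustrative.
    pg_upgrade \
        -b /usr/lib/postgresql/14/bin -B /usr/lib/postgresql/15/bin \
        -d /var/lib/postgresql/14/data -D /var/lib/postgresql/15/data \
        --dump-options='--exclude-schema=audit'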

--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
