Re: pg_upgrade: transfer pg_largeobject_metadata's files when possible

From: Andres Freund <andres(at)anarazel(dot)de>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Nitin Motiani <nitinmotiani(at)google(dot)com>, Hannu Krosing <hannuk(at)google(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_upgrade: transfer pg_largeobject_metadata's files when possible
Date: 2026-02-05 16:19:46
Message-ID: qfp7z4c3m6bw53oky33halcdhomiijod6segzrhf757ijjvapr@e5rf4u5wnfmu
Lists: pgsql-hackers

Hi,

On 2026-02-04 10:06:29 -0600, Nathan Bossart wrote:
> IIUC your critique is that this doesn't explain the overwriting behavior
> like the older comment does. I'll work on adding that.

I think even the old comment woefully under-documented that the commands
are all just make-work that's going to be thrown out almost immediately after.

Thanks for addressing that!

On 2026-02-04 14:08:47 -0600, Nathan Bossart wrote:
> On Wed, Feb 04, 2026 at 10:06:29AM -0600, Nathan Bossart wrote:
> > IIUC your critique is that this doesn't explain the overwriting behavior
> > like the older comment does. I'll work on adding that.
> >
> > [...]
> >
> > I'm considering a couple of options here, but it seems like the easiest
> > thing to do is to move the TRUNCATE commands to the end of the dump file.
> > At least, that seems to be sufficient for our existing tests. If that
> > seems okay to you, I can work on putting together a patch.
>
> Here is a rough first draft of a patch that does this.

It certainly seems better than what we do now. It still feels pretty grotty and
error-prone to me that we fill the catalog table and then throw the contents
out.

> @@ -1157,7 +1158,10 @@ main(int argc, char **argv)
> * subsequent COMMENT and SECURITY LABEL commands work. pg_upgrade
> * can't copy/link the files from older versions because aclitem
> * (needed by pg_largeobject_metadata.lomacl) changed its storage
> - * format in v16.
> + * format in v16. At the end of the dump, we'll generate a TRUNCATE
> + * command for pg_largeobject_metadata so that its contents are
> + * cleared in preparation for the subsequent file transfer by
> + * pg_upgrade.
> */

I'd move that comment earlier in the paragraph; as written, it sounds a bit
like it applies only to the v16-specific bits.
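To make the overwriting behavior concrete, the resulting dump would have
roughly this shape (an illustrative sketch of the ordering only, not the
patch's literal output; the specific OIDs and values are made up):

```sql
-- 1. pg_largeobject_metadata is populated first, so that the subsequent
--    COMMENT and SECURITY LABEL commands can resolve the large objects.
INSERT INTO pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl)
    VALUES (16384, 10, NULL);

-- 2. COMMENT / SECURITY LABEL commands then work against those rows.
COMMENT ON LARGE OBJECT 16384 IS 'example comment';

-- 3. At the very end of the dump, the contents are thrown out again,
--    since pg_upgrade will transfer the files from the old cluster.
TRUNCATE pg_catalog.pg_largeobject, pg_catalog.pg_largeobject_metadata;
```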

> if (fout->remoteVersion >= 160000)
> lo_metadata->dataObj->filtercond = "WHERE oid IN "
> @@ -1243,6 +1247,13 @@ main(int argc, char **argv)
> for (i = 0; i < numObjs; i++)
> dumpDumpableObject(fout, dobjs[i]);
>
> + /*
> + * For binary upgrades, set relfrozenxids, relminmxids, and relfilenodes
> + * of pg_largeobject and maybe pg_largeobject_metadata, and remove all
> + * their files. We will transfer them from the old cluster as needed.
> + */
> + dumpLOTruncation(fout);
> +
> /*
> * Set up options info to ensure we dump what we want.
> */

Seems good to move this to a dedicated function, regardless of anything else.

Do I see correctly that we just rely on the ordering in the file, rather than
dependencies? That's not a complaint, I just don't know that code very well.

Greetings,

Andres Freund
