| From: | Andres Freund <andres(at)anarazel(dot)de> |
|---|---|
| To: | Nathan Bossart <nathandbossart(at)gmail(dot)com> |
| Cc: | Michael Paquier <michael(at)paquier(dot)xyz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Nitin Motiani <nitinmotiani(at)google(dot)com>, Hannu Krosing <hannuk(at)google(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: pg_upgrade: transfer pg_largeobject_metadata's files when possible |
| Date: | 2026-02-12 21:46:30 |
| Message-ID: | aY5FaQuLTA0jFinh@alap3.anarazel.de |
| Lists: | pgsql-hackers |
Hi,
On 2026-02-11 15:00:51 -0600, Nathan Bossart wrote:
> On Sun, Feb 08, 2026 at 04:00:40PM -0600, Nathan Bossart wrote:
> > IIRC the issue is that getTableAttrs() won't pick up the OID column on
> > older versions. It might be easy to fix that by adjusting its query for
> > binary upgrades from <v12. That could be worth doing, if for no other
> > reason than to simplify some of the pg_dump code. I'll make a note of it.
>
> This was a little more painful than I expected, but this seems to be what
> is required to allow COPY-ing pg_largeobject_metadata during binary
> upgrades from < v12.
Nice!
> @@ -2406,11 +2404,14 @@ dumpTableData_copy(Archive *fout, const void *dcontext)
> column_list = fmtCopyColumnList(tbinfo, clistBuf);
>
> /*
> - * Use COPY (SELECT ...) TO when dumping a foreign table's data, and when
> - * a filter condition was specified. For other cases a simple COPY
> - * suffices.
> + * Use COPY (SELECT ...) TO when dumping a foreign table's data, when a
> + * filter condition was specified, and when in binary upgrade mode and
> + * dumping an old pg_largeobject_metadata defined WITH OIDS. For other
> + * cases a simple COPY suffices.
> */
> - if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE)
> + if (tdinfo->filtercond || tbinfo->relkind == RELKIND_FOREIGN_TABLE ||
> + (fout->dopt->binary_upgrade && fout->remoteVersion < 120000 &&
> + tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId))
> {
> /* Temporary allows to access to foreign tables to dump data */
> if (tbinfo->relkind == RELKIND_FOREIGN_TABLE)
Not really the fault of this patch, but it seems somewhat grotty to have
binary-upgrade-specific code in this place. I was certainly confused when
first trying to use pg_dump in binary upgrade mode with large objects, because
no data was dumped when using plain text mode, which is what I had been using
for simplicity...
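For context, IIUC the command that path ends up generating for
pg_largeobject_metadata is roughly the following (just a sketch, the exact
column list and quoting come from getTableAttrs() / fmtCopyColumnList()):

    -- sketch of the generated dump-side command for a <v12 source
    COPY (SELECT oid, lomowner, lomacl
          FROM pg_catalog.pg_largeobject_metadata) TO stdout;

i.e. the oid has to be selected explicitly, since a plain COPY of a <v12
WITH OIDS table wouldn't emit it.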
I guess you could instead generate a COPY using WITH OIDS. But it's probably
not worth having that path, given we already need to support COPY (SELECT ..).
OTOH, I think it'd perhaps avoid needing to deal with this:
> @@ -9442,7 +9428,18 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
> "(pt.classoid = co.tableoid AND pt.objoid = co.oid)\n");
>
> appendPQExpBufferStr(q,
> - "WHERE a.attnum > 0::pg_catalog.int2\n"
> + "WHERE a.attnum > 0::pg_catalog.int2\n");
> +
> + /*
> + * For binary upgrades from <v12, be sure to pick up
> + * pg_largeobject_metadata's oid column.
> + */
> + if (fout->dopt->binary_upgrade && fout->remoteVersion < 120000)
> + appendPQExpBufferStr(q,
> + "OR (a.attnum = -2::pg_catalog.int2 AND src.tbloid = "
> + CppAsString2(LargeObjectMetadataRelationId) ")\n");
> +
as the oid column would just be emitted without needing to somehow include it
in the attribute list.
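(FWIW, the attnum = -2 in that hunk is the hidden oid system column on a <v12
server; something along these lines, run against an old cluster, shows it next
to the other system columns:

    -- pre-v12 only: list pg_largeobject_metadata's system columns
    SELECT attname, attnum
    FROM pg_catalog.pg_attribute
    WHERE attrelid = 'pg_catalog.pg_largeobject_metadata'::regclass
      AND attnum < 0
    ORDER BY attnum DESC;

and that oid row is the one the new OR branch pulls into the attribute list.)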
> @@ -9544,7 +9541,9 @@ getTableAttrs(Archive *fout, TableInfo *tblinfo, int numTables)
>
> for (int j = 0; j < numatts; j++, r++)
> {
> - if (j + 1 != atoi(PQgetvalue(res, r, i_attnum)))
> + if (j + 1 != atoi(PQgetvalue(res, r, i_attnum)) &&
> + !(fout->dopt->binary_upgrade && fout->remoteVersion < 120000 &&
> + tbinfo->dobj.catId.oid == LargeObjectMetadataRelationId))
> pg_fatal("invalid column numbering in table \"%s\"",
> tbinfo->dobj.name);
> tbinfo->attnames[j] = pg_strdup(PQgetvalue(res, r, i_attname));
> --
> 2.50.1 (Apple Git-155)
I guess WITH OIDS would also avoid the need for this.
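On the old server that'd just be the pre-v12 syntax, something like:

    -- pre-v12 syntax; WITH OIDS prepends each row's oid to the COPY output
    COPY pg_catalog.pg_largeobject_metadata TO stdout WITH OIDS;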
Greetings,
Andres Freund