| From: | Nathan Bossart <nathandbossart(at)gmail(dot)com> |
|---|---|
| To: | Andres Freund <andres(at)anarazel(dot)de> |
| Cc: | Michael Paquier <michael(at)paquier(dot)xyz>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Nitin Motiani <nitinmotiani(at)google(dot)com>, Hannu Krosing <hannuk(at)google(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: pg_upgrade: transfer pg_largeobject_metadata's files when possible |
| Date: | 2026-02-05 17:36:00 |
| Message-ID: | aYTVAFTPelebVl6W@nathan |
| Lists: | pgsql-hackers |
On Thu, Feb 05, 2026 at 11:19:46AM -0500, Andres Freund wrote:
> It certainly seems better than what we do now. Still feels pretty grotty and
> error prone to me that we fill the catalog table and then throw the contents
> out.
Before I go any further with this approach, another idea occurred to me that I believe is worth considering...
As of commit 3bcfcd815e, the only reason we are dumping any of
pg_largeobject_metadata at all is to avoid an ERROR during COMMENT ON or
SECURITY LABEL ON because the call to LargeObjectExists() in
get_object_address() returns false. If we bypass that check in
binary-upgrade mode, we can skip dumping pg_largeobject_metadata entirely.
The attached patch passes our existing tests, and it seems to create the
expected binary-upgrade-mode dump files, too. I haven't updated any of the
comments yet.
--
nathan
| Attachment | Content-Type | Size |
|---|---|---|
| v2-0001-fix-pg_largeobject_metadata-file-transfer.patch | text/plain | 4.3 KB |