Re: pg_dump sort priority mismatch for large objects

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Nitin Motiani <nitinmotiani(at)google(dot)com>, Hannu Krosing <hannuk(at)google(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_dump sort priority mismatch for large objects
Date: 2025-07-10 17:38:40
Message-ID: 1605840.1752169120@sss.pgh.pa.us
Lists: pgsql-hackers

Nathan Bossart <nathandbossart(at)gmail(dot)com> writes:
> On Thu, Jul 10, 2025 at 06:05:26PM +0530, Nitin Motiani wrote:
>> I looked through the history of this to see how it happened and whether
>> it is a pre-existing issue. Prior to a45c78e3284b, dumpLO used to put
>> large objects in SECTION_PRE_DATA. That commit changed dumpLO and also
>> changed addBoundaryDependencies to move DO_LARGE_OBJECT from the
>> pre-data to the data section. It seems that pg_dump_sort.c has been
>> inconsistent with this ever since. I think the change in pg_dump_sort.c
>> should be backported to PG17 & 18 independent of the state of the
>> larger patch.

> +1, if for no other reason than we'll need it to be below PRIO_TABLE_DATA
> for the speed-up-pg_upgrade-with-many-LOs patch [0]. Does anyone see any
> problems with applying something like the following down to v17?

That's clearly an oversight in a45c78e3284b. I agree that fixing
pg_dump_sort.c to match shouldn't create any functional difficulties.
It might make the topological sort step marginally faster by
reducing the number of ordering violations that have to be fixed.
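For reference, the relevant piece of pg_dump_sort.c is the mapping from
DumpableObjectType to a PRIO_ value; the sketch below only illustrates the
kind of reordering under discussion (assuming the enum-based priority table
used in recent branches), and is not the proposed patch itself:

    /* Illustrative sketch, not the actual patch: the priority enum and the
     * per-object-type mapping in pg_dump_sort.c decide dump order. */
    enum dbObjectTypePriorities
    {
        /* ... object-definition (pre-data) priorities ... */
        PRIO_PRE_DATA_BOUNDARY,     /* boundary! */
        PRIO_TABLE_DATA,
        PRIO_SEQUENCE_SET,
        PRIO_LARGE_OBJECT,          /* now in the data section, below
                                     * PRIO_TABLE_DATA, matching the
                                     * SECTION_DATA placement of LOs */
        PRIO_LARGE_OBJECT_DATA,
        PRIO_POST_DATA_BOUNDARY,    /* boundary! */
        /* ... post-data priorities ... */
    };

    static const int dbObjectTypePriority[] =
    {
        /* ... */
        [DO_LARGE_OBJECT] = PRIO_LARGE_OBJECT,
        [DO_LARGE_OBJECT_DATA] = PRIO_LARGE_OBJECT_DATA,
        /* ... */
    };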

regards, tom lane
