Re: Horribly slow pg_upgrade performance with many Large Objects

From: Hannu Krosing <hannuk(at)google(dot)com>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Horribly slow pg_upgrade performance with many Large Objects
Date: 2025-07-09 14:52:16
Message-ID: CAMT0RQStPtHfKwowd88Q0tynX0x=uJSKn=ihP8syhDJ6cH3DHQ@mail.gmail.com
Lists: pgsql-hackers

On Tue, Jul 8, 2025 at 11:06 PM Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>
> On Sun, Jul 06, 2025 at 02:48:08PM +0200, Hannu Krosing wrote:
> > Did a quick check of the patch and it seems to work ok.
>
> Thanks for taking a look.
>
> > What do you think of the idea of not dumping pg_shdepend here, but
> > instead adding the required entries after loading
> > pg_largeobject_metadata based on the contents of it ?
>
> While not dumping it might save a little space during upgrade, the query
> seems to be extremely slow. So, I don't see any strong advantage.

Yeah, it looks like the part that avoids duplicates is what made it slow.

If you run it without the last WHERE clause, it is reasonably fast, and it
then behaves the same as just inserting from the dump, which also does not
have any checks against duplicates.
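
A rough sketch of the kind of statement I mean (not the exact query I
ran; the column list, constants, and filtering are simplified from
memory -- ACL dependencies are ignored, objects owned by pinned roles
such as the bootstrap superuser would need to be skipped, and a direct
insert into a system catalog only works with allow_system_table_mods or
in the binary-upgrade context):

INSERT INTO pg_shdepend
       (dbid, classid, objid, objsubid, refclassid, refobjid, deptype)
SELECT (SELECT oid FROM pg_database WHERE datname = current_database()),
       'pg_largeobject'::regclass, l.oid, 0,
       'pg_authid'::regclass, l.lomowner, 'o'
FROM   pg_largeobject_metadata l
-- the duplicate check below is the "last WHERE" mentioned above;
-- without it the statement behaves like the plain inserts from the dump
WHERE  NOT EXISTS (SELECT 1 FROM pg_shdepend d
                   WHERE d.classid  = 'pg_largeobject'::regclass
                     AND d.objid    = l.oid
                     AND d.refobjid = l.lomowner);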
