Re: improve performance of pg_dump with many sequences

From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Euler Taveira <euler(at)eulerto(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: improve performance of pg_dump with many sequences
Date: 2026-01-07 23:06:22
Message-ID: aV7m7rTty4KAb-tF@nathan
Lists: pgsql-hackers

I'm still looking into this, but here are some preliminary thoughts.

On Mon, Dec 29, 2025 at 12:26:01PM -0500, Tom Lane wrote:
> In the no-good-deed-goes-unpunished department: pg_dump's use
> of pg_get_sequence_data() (nee pg_sequence_read_tuple()) is
> evidently responsible for the complaint in bug #19365 [1]
> that pg_dump can no longer survive concurrent sequence drops.

This seems to be reproducible on older versions, too. With a well-timed
sleep right before dumpSequenceData()'s pre-v18 query, I can produce a
relation-does-not-exist error with a concurrent sequence drop. Perhaps v18
made this easier to hit, but since v18 moved the sequence tuple access into
collectSequences()'s query, I'm not sure why that would be.
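FWIW the race can be approximated from the shell without patching in a
sleep; the database and sequence names below are made up, and without the
injected sleep the window is narrow, so this is best-effort rather than
deterministic:

```shell
# Best-effort sketch of the concurrent-drop race (names are hypothetical).
createdb seqrace
psql -d seqrace -c "CREATE SEQUENCE s1; CREATE SEQUENCE s2;"

# Start the dump in the background, then drop a sequence while it runs.
pg_dump -d seqrace -f /dev/null &
psql -d seqrace -c "DROP SEQUENCE s2;"

# Depending on timing, pg_dump exits nonzero with an error along the
# lines of: ERROR: relation "s2" does not exist
wait $! || echo "pg_dump failed (race hit)"
```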

> BTW, I'm unconvinced that pg_dump behaves sanely when this function
> does return nulls. I think the ideal thing would be for it to skip
> issuing setval(), but right now it looks like it will issue one with
> garbage values.

Before v18, pg_dump simply errored out due to insufficient privileges on
the sequence. IMHO that makes sense. If you ask pg_dump to dump something
you don't have privileges on, I'd expect it to error rather than silently
skip it.

--
nathan
