Re: pg_upgrade failing for 200+ million Large Objects

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robins Tharakan <tharakan(at)gmail(dot)com>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-08 16:33:02
Message-ID: 1742698.1615221182@sss.pgh.pa.us
Lists: pgsql-hackers

Robins Tharakan <tharakan(at)gmail(dot)com> writes:
> On Mon, 8 Mar 2021 at 23:34, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
>> Without looking, I would guess it's the schema reload using
>> pg_dump/pg_restore and not actually pg_upgrade itself. This is a known
>> issue in pg_dump/pg_restore. And if that is the case -- perhaps just
>> running all of those in a single transaction would be a better choice?
>> One could argue it's still not a proper fix, because we'd still have a
>> huge memory usage etc, but it would then only burn 1 xid instead of
>> 500M...

> (I hope I am not missing something but) When I tried to force pg_restore to
> use a single transaction (by hacking pg_upgrade's pg_restore call to use
> --single-transaction), it too failed owing to being unable to lock so many
> objects in a single transaction.
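
For context on why that runs out of locks: the shared lock table can track roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) objects, and a single-transaction restore would need one lock per large object. The arithmetic below is only an illustration using the stock default GUC values, not output from a live server:

    /*
     * Illustration only: rough capacity of the shared lock table versus the
     * number of locks a single-transaction restore of 200M large objects
     * would need.  The values are the stock defaults for the relevant GUCs,
     * hard-coded here rather than read from a running server.
     */
    #include <stdio.h>

    int
    main(void)
    {
        long    max_locks_per_transaction = 64;     /* default */
        long    max_connections = 100;              /* default */
        long    max_prepared_transactions = 0;      /* default */
        long    large_objects = 200000000L;         /* from the report */

        long    capacity = max_locks_per_transaction *
            (max_connections + max_prepared_transactions);

        printf("approx. shared lock table capacity: %ld\n", capacity);
        printf("locks needed, one per large object: %ld\n", large_objects);

        return 0;
    }

With the defaults that is about 6400 trackable locks against hundreds of millions of objects, so raising max_locks_per_transaction alone is not a realistic escape hatch.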

It does seem that --single-transaction is a better idea than fiddling with
the transaction wraparound parameters, since the latter is just going to
put off the onset of trouble. However, we'd have to do something about
the lock consumption. Would it be sane to have the backend not bother to
take any locks in binary-upgrade mode?
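
A minimal standalone sketch of that idea, for illustration only: it is not a patch against the backend, just a toy program gating lock acquisition on a flag named after the existing IsBinaryUpgrade global. The lock_object() function and its behavior are simplified stand-ins, not the real lock manager API.

    /*
     * Sketch only: what "skip heavyweight locks in binary-upgrade mode"
     * could look like in outline.  IsBinaryUpgrade mirrors the backend
     * flag of the same name; everything else is a simplified stand-in.
     */
    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned int Oid;

    static bool IsBinaryUpgrade = false;    /* set when started by pg_upgrade */

    static void
    lock_object(Oid objid)
    {
        /*
         * In binary-upgrade mode only pg_upgrade's own sessions are
         * connected, so the lock arguably protects nothing; skipping it
         * keeps the shared lock table from overflowing during a
         * --single-transaction restore.
         */
        if (IsBinaryUpgrade)
            return;

        /* normal path: acquire the heavyweight lock here */
        printf("locking object %u\n", objid);
    }

    int
    main(void)
    {
        IsBinaryUpgrade = true;     /* as if launched by pg_upgrade */

        for (Oid objid = 1; objid <= 3; objid++)
            lock_object(objid);     /* no locks actually taken */

        return 0;
    }

Whether such a bypass is safe in the real backend is exactly the open question above, since binary-upgrade mode is the only situation where no concurrent sessions should exist.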

regards, tom lane
