Re: pg_upgrade failing for 200+ million Large Objects

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-08 17:18:12
Message-ID: CABUevEwu3_Jiqbd-Fo=9DhqoPc7_YHS+hLX8sh00BidyhEs7AQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Mar 8, 2021 at 5:58 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Magnus Hagander <magnus(at)hagander(dot)net> writes:
> > On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >> It does seem that --single-transaction is a better idea than fiddling with
> >> the transaction wraparound parameters, since the latter is just going to
> >> put off the onset of trouble. However, we'd have to do something about
> >> the lock consumption. Would it be sane to have the backend not bother to
> >> take any locks in binary-upgrade mode?
>
> > I believe the problem occurs when writing them rather than when
> > reading them, and I don't think we have a binary upgrade mode there.
>
> You're confusing pg_dump's --binary-upgrade switch (indeed applied on
> the dumping side) with the backend's -b switch (IsBinaryUpgrade,
> applied on the restoring side).

Ah. Yes, I am.

> > We could invent one of course. Another option might be to exclusively
> > lock pg_largeobject, and just say that if you do that, we don't have
> > to lock the individual objects (ever)?
>
> What was in the back of my mind is that we've sometimes seen complaints
> about too many locks needed to dump or restore a database with $MANY
> tables; so the large-object case seems like just a special case.

It is -- but I guess it's more likely to have 100M large objects than
to have 100M tables (and the cutoff point comes a lot earlier than
100M). But the fundamental problem is the same.
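To put a rough number on that cutoff (back-of-the-envelope, assuming
the stock settings; the real limit depends on the configuration):

    shared lock table size ~= max_locks_per_transaction
                              * (max_connections + max_prepared_transactions)
                           ~= 64 * (100 + 0) = 6400 locks with the defaults

so restoring 200M large objects in one transaction -- one lock per
object -- would mean pushing max_locks_per_transaction to somewhere
around 200M / 100 = 2M.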

> The answer up to now has been "raise max_locks_per_transaction enough
> so you don't see the failure". Having now consumed a little more
> caffeine, I remember that that works in pg_upgrade scenarios too,
> since the user can fiddle with the target cluster's postgresql.conf
> before starting pg_upgrade.
>
> So it seems like the path of least resistance is
>
> (a) make pg_upgrade use --single-transaction when calling pg_restore
>
> (b) document (better) how to get around too-many-locks failures.

Agreed. That certainly seems like a better path forward than
arbitrarily pushing the transaction wraparound limits, which just
postpones the problem.
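Concretely, for (b), something like this in the new cluster's
postgresql.conf before starting pg_upgrade (the value is made up for
illustration; it has to be sized to the number of objects being
restored):

    # new cluster's postgresql.conf, edited before running pg_upgrade
    max_locks_per_transaction = 2000000    # illustrative only

With (a) in place, pg_upgrade's pg_restore step would run everything
in a single transaction, so all of those locks have to fit in the
shared lock table at once.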

--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
