Re: pg_upgrade failing for 200+ million Large Objects

From: Jan Wieck <jan(at)wi3ck(dot)info>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2021-03-20 16:55:24
Message-ID: 147fa478-510b-18ef-5323-9c1725b2493c@wi3ck.info
Lists: pgsql-hackers

On 3/20/21 11:23 AM, Tom Lane wrote:
> Jan Wieck <jan(at)wi3ck(dot)info> writes:
>> All that aside, the entire approach doesn't scale.
>
> Yeah, agreed. When we gave large objects individual ownership and ACL
> info, it was argued that pg_dump could afford to treat each one as a
> separate TOC entry because "you wouldn't have that many of them, if
> they're large". The limits of that approach were obvious even at the
> time, and I think now we're starting to see people for whom it really
> doesn't work.

It actually looks more like some users have millions of "small objects".
I am still wondering where those are coming from and why people abuse
LOs in that way, but that is more out of curiosity. The fact is that
these users are out there and that they cannot upgrade from their 9.5
databases, which are now past EOL.

>
> I wonder if pg_dump could improve matters cheaply by aggregating the
> large objects by owner and ACL contents. That is, do
>
> select distinct lomowner, lomacl from pg_largeobject_metadata;
>
> and make just *one* BLOB TOC entry for each result. Then dump out
> all the matching blobs under that heading.
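
Right, so on the per-group side that would boil down to something like
this (just a sketch, using the same catalog columns as in your query
above; $1/$2 stand in for one (lomowner, lomacl) pair from the DISTINCT
result):

    SELECT oid
      FROM pg_largeobject_metadata
     WHERE lomowner = $1
       AND lomacl IS NOT DISTINCT FROM $2;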

What I am currently experimenting with is moving the BLOB TOC entries
into the parallel data phase of pg_restore "when doing binary upgrade".
It seems to scale nicely with the number of cores in the system. In
addition to that, I have options for pg_upgrade and pg_restore that
cause the restore to batch the blobs into transactions, like 10,000
objects at a time. There was a separate thread for that, but I guess it
is better to keep it all together here now.
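
In terms of the SQL sent to the new cluster, the batching roughly turns
the one-transaction-per-blob pattern into something like this (only a
sketch; the OID and role names are made up for illustration):

    BEGIN;
    -- each blob is created and ALTERed/GRANTed inside the batch
    -- transaction, repeated for up to the configured batch size
    SELECT pg_catalog.lo_create('16385');
    ALTER LARGE OBJECT 16385 OWNER TO appuser;
    GRANT SELECT ON LARGE OBJECT 16385 TO readonly;
    -- ... up to 10,000 objects ...
    COMMIT;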

>
> A possible objection is that it'd reduce the ability to restore blobs
> selectively, so maybe we'd need to make it optional.

I fully intend to make all this into new "options". I am afraid that
there is no one-size-fits-all solution here.

>
> Of course, that just reduces the memory consumption on the client
> side; it does nothing for the locks. Can we get away with releasing the
> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?

I'm not very fond of the idea of going lockless while at the same time
trying to parallelize the restore phase. That can lead to really nasty
race conditions. For now I'm aiming at batches in transactions.
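
The batches also keep the lock problem bounded: every blob touched in a
transaction still holds its lock until COMMIT, but that is now at most
one batch worth of locks per restore job instead of all 200+ million at
once. The target cluster just needs a lock table sized for that, e.g.
(illustrative value only, not a recommendation):

    -- make room for roughly one lock per blob in a batch,
    -- times the number of parallel restore jobs
    ALTER SYSTEM SET max_locks_per_transaction = 10000;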

Regards, Jan

--
Jan Wieck
Principal Database Engineer
Amazon Web Services
