From: "Kumar, Sachin" <ssetiya(at)amazon(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Nathan Bossart <nathandbossart(at)gmail(dot)com>, Jan Wieck <jan(at)wi3ck(dot)info>, Bruce Momjian <bruce(at)momjian(dot)us>, Zhihong Yu <zyu(at)yugabyte(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Magnus Hagander <magnus(at)hagander(dot)net>, Robins Tharakan <tharakan(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2023-12-07 14:05:13
Message-ID: 240D05EC-8B28-4112-BEAB-85ECBAF3F871@amazon.com
Lists: pgsql-hackers
> I have updated the patch to use a heuristic: during pg_upgrade we count
> large objects per database. During pg_restore execution, if the database's
> large-object count is greater than LARGE_OBJECTS_THRESOLD (1k), we use
> --restore-blob-batch-size.
I think both SECTION_DATA and SECTION_POST_DATA entries can be parallelized by pg_restore, so instead of counting only large objects for the heuristic, we could count SECTION_DATA + SECTION_POST_DATA entries.
Regards
Sachin