| From: | Michael Paquier <michael(at)paquier(dot)xyz> |
|---|---|
| To: | Corey Huinker <corey(dot)huinker(at)gmail(dot)com> |
| Cc: | Nathan Bossart <nathandbossart(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: improve performance of pg_dump --binary-upgrade |
| Date: | 2024-04-18 06:24:12 |
| Message-ID: | ZiC8jEaFahXq9aAu@paquier.xyz |
| Lists: | pgsql-hackers |
On Thu, Apr 18, 2024 at 02:08:28AM -0400, Corey Huinker wrote:
> Bar-napkin math tells me that with a worst-case architecture and braindead byte
> alignment, we'd burn 64 bytes per struct, so the 100K tables cited would be
> about 6.25MB of memory.
>
> The obvious low-memory alternative would be to make a prepared statement,
> though that does nothing to cut down on the roundtrips.
>
> I think this is a good trade off.
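
The back-of-the-envelope estimate above is easy to check. A minimal sketch (the 64-byte struct size is Corey's stated worst-case assumption, not a measured figure):

```python
# Worst-case per-table cache cost from the estimate above.
bytes_per_struct = 64
tables = 100_000  # the 100K-table scenario cited in the thread

total_bytes = bytes_per_struct * tables
total_mib = total_bytes / (1024 * 1024)
print(f"{total_bytes} bytes ~= {total_mib:.2f} MiB")
```

This works out to 6,400,000 bytes, i.e. roughly 6.1 MiB (6.4 MB decimal), the same ballpark as the ~6.25MB cited, and small enough to support the trade-off argument either way.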
I've not checked the patch in detail or tested it, but caching this
information to gain this speed sounds like a very good thing.
--
Michael