From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Chao Li <li(dot)evan(dot)chao(at)gmail(dot)com>
Cc: Dimitrios Apostolou <jimis(at)gmx(dot)net>, Nathan Bossart <nathandbossart(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: [PING] [PATCH v2] parallel pg_restore: avoid disk seeks when jumping short distance forward
Date: 2025-10-14 00:36:07
Message-ID: 845486.1760402167@sss.pgh.pa.us
Lists: pgsql-hackers
Chao Li <li(dot)evan(dot)chao(at)gmail(dot)com> writes:
> I tested DEFAULT_IO_BUFFER_SIZE with 4K, 32K, 64K, 128K and 256K. Looks like increasing the buffer size doesn’t improve the performance significantly. Actually, with the buffer size 64K, 128K and 256K, the test results are very close. I tested both with lz4 and none compression. I am not suggesting tuning the buffer size. These data are only for your reference.
Yeah, I would not expect straight pg_dump/pg_restore performance
to vary very much once the buffer size gets above not-too-many KB.
The thing we are really interested in here is how fast pg_restore
can skip over unwanted table data in a large archive file, and that
I believe should be pretty sensitive to block size.
You could measure that without getting into the complexities of
parallel restore: make a custom-format dump of a few large
tables that does not have offset data in it, and then see how
fast a selective restore of just the last table is.
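A minimal sketch of that measurement might look like the following. The database and table names are hypothetical; the key trick is that when pg_dump writes to a non-seekable destination (a pipe), it cannot go back and fill in the data offsets in the TOC, so pg_restore is forced to skip forward through the preceding table data block by block:

```shell
# Create a custom-format dump without TOC data offsets by writing
# through a pipe (pg_dump cannot seek back to patch in offsets).
# "testdb", "target", and "last_table" are placeholder names.
pg_dump -Fc -d testdb | cat > nooffsets.dump

# Time a selective restore of only the last table in the archive;
# pg_restore must skip over all earlier table data to reach it.
time pg_restore -d target -t last_table nooffsets.dump
```

Comparing the wall-clock time of the second step across builds with different DEFAULT_IO_BUFFER_SIZE values should isolate the skip-forward cost that the patch targets.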
regards, tom lane