From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump test instability
Date: 2018-08-27 14:45:58
Message-ID: 1665.1535381158@sss.pgh.pa.us
Lists: pgsql-hackers
Stephen Frost <sfrost(at)snowman(dot)net> writes:
> * Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
>> However, at least for the directory-format case (which I think is the
>> only one supported for parallel restore), we could make it compare the
>> file sizes of the TABLE DATA items. That'd work pretty well as a proxy
>> for both the amount of effort needed for table restore, and the amount
>> of effort needed to build indexes on the tables afterwards.
> Parallel restore also works w/ custom-format dumps.
Really. Well then the existing code is even more broken, because it
only does this sorting for directory output:
    /* If we do a parallel dump, we want the largest tables to go first */
    if (archiveFormat == archDirectory && numWorkers > 1)
        sortDataAndIndexObjectsBySize(dobjs, numObjs);
so that parallel restore is completely left in the lurch with a
custom-format dump.
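One direction, purely as a sketch (assuming the guard could simply be
relaxed so the sort also happens when writing a custom-format archive;
archCustom is the existing format enum value, but whether
sortDataAndIndexObjectsBySize needs any other adjustment on that path is
not established here):

    /*
     * Hypothetical relaxation, not the current code: sort the largest
     * tables first for any archive format that a parallel restore can
     * consume, rather than only when the dump itself runs in parallel.
     */
    if (archiveFormat == archDirectory || archiveFormat == archCustom)
        sortDataAndIndexObjectsBySize(dobjs, numObjs);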
But I imagine we can get some measure of table data size out of a custom
dump too.
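(For the directory-format case quoted above, the file-size proxy could be
as simple as stat()ing each TABLE DATA file and sorting largest-first.  A
rough, self-contained sketch follows; the struct and function names are
illustrative, not the actual archive code:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    typedef struct
    {
        char        path[1024];     /* e.g. "dumpdir/3456.dat" */
        long        size;           /* filled in from stat() below */
    } DataItem;

    /* qsort comparator: larger files sort first */
    static int
    size_desc(const void *a, const void *b)
    {
        long        sa = ((const DataItem *) a)->size;
        long        sb = ((const DataItem *) b)->size;

        return (sa < sb) - (sa > sb);
    }

    /* Use on-disk file size as a proxy for restore cost and sort by it. */
    static void
    sort_items_by_file_size(DataItem *items, size_t n)
    {
        struct stat st;
        size_t      i;

        for (i = 0; i < n; i++)
            items[i].size = (stat(items[i].path, &st) == 0) ? (long) st.st_size : 0;
        qsort(items, n, sizeof(DataItem), size_desc);
    }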
regards, tom lane