Re: Should pg_dump dump larger tables first?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David Rowley" <dgrowleyml(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Should pg_dump dump larger tables first?
Date: 2013-01-29 23:34:30
Lists: pgsql-hackers
"David Rowley" <dgrowleyml(at)gmail(dot)com> writes:
> If pg_dump was to still follow the dependencies of objects, would there be
> any reason why it shouldn't backup larger tables first?

Pretty much every single discussion/complaint about pg_dump's ordering
choices has been about making its behavior more deterministic, not less
so.  So I can't imagine such a change would go over well with most folks.

Also, it's far from obvious to me that "largest first" is the best rule
anyhow; it's likely to be more complicated than that.

But anyway, the right place to add this sort of consideration is in
pg_restore --parallel, not pg_dump.  I don't know how hard it would be
for the scheduler algorithm in there to take table size into account,
but at least in principle it should be possible to find out the size of
the (compressed) table data from examination of the archive file.
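To illustrate the kind of size-aware scheduling being suggested for the
parallel restore path, here is a minimal longest-first greedy sketch in
Python (not pg_restore's actual scheduler; table names and compressed
sizes are hypothetical):

```python
import heapq

def schedule_largest_first(tables, workers):
    """Greedy LPT: hand each table, largest first, to the least-loaded worker."""
    heap = [(0, w) for w in range(workers)]  # (total assigned size, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(workers)}
    for name, size in sorted(tables, key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)       # worker with the least work so far
        assignment[w].append(name)
        heapq.heappush(heap, (load + size, w))
    return assignment

# Hypothetical compressed table-data sizes, in MB
tables = [("orders", 900), ("events", 700), ("users", 300), ("logs", 100)]
print(schedule_largest_first(tables, 2))
# → {0: ['orders', 'logs'], 1: ['events', 'users']}
```

A real implementation would additionally have to respect the archive's
dependency ordering, which is why "largest first" alone is too simple a
rule, as noted above.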

			regards, tom lane
