Re: pg_dump additional options for performance

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_dump additional options for performance
Date: 2008-02-26 05:39:29
Message-ID: 24991.1204004369@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> ... So it would be good if we could dump objects in 3 groups
> 1. all commands required to re-create table
> 2. data
> 3. all commands required to complete table after data load

[ much subsequent discussion snipped ]
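For reference, the three groups Simon describes correspond roughly to the pre-data, data, and post-data sections that newer pg_dump releases expose through the --section option. A minimal sketch of such a split as three separate dumps (the --section flag and the database/file names are illustrative here, not part of the original proposal):

    pg_dump --section=pre-data  -f pre-data.sql  mydb   # CREATE TABLE and other pre-load DDL
    pg_dump --section=data      -f data.sql      mydb   # table data (COPY)
    pg_dump --section=post-data -f post-data.sql mydb   # indexes, constraints, triggers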

BTW, what exactly was the use-case for this? The recent discussions
about parallelizing pg_restore make it clear that the all-in-one
dump file format still has lots to recommend it. So I'm just wondering
what the actual advantage of splitting the dump into multiple files
will be. It clearly makes life more complicated; what are we buying?

regards, tom lane
