Re: pg_dump additional options for performance

From: Magnus Hagander <magnus(at)hagander(dot)net>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Tom Dunstan <pgsql(at)tomd(dot)cc>, Dimitri Fontaine <dfontaine(at)hi-media(dot)com>, pgsql-hackers(at)postgresql(dot)org, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: pg_dump additional options for performance
Date: 2008-02-26 14:12:26
Message-ID: 20080226141226.GQ528@svr2.hagander.net
Lists: pgsql-hackers

On Tue, Feb 26, 2008 at 08:28:11AM -0500, Andrew Dunstan wrote:
>
>
> Simon Riggs wrote:
> >Separate files seems much simpler...
> >
> >
>
> Yes, we need to stick to the KISS principle.
>
> ISTM that we could simply invent a new archive format of "d" for directory.

Yeah, you can always zip (or whatever) the resulting directory when you're
done.
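
For illustration, a rough sketch of how such a directory format could be
used, assuming a "d" format is selected the same way as the existing custom
format (the exact option letters are hypothetical at this stage):

    # dump the database with one file per table inside a directory
    pg_dump -F d -f /backup/mydb.dir mydb

    # pack the directory into a single compressed file afterwards
    tar czf /backup/mydb.tar.gz -C /backup mydb.dir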

But looking at it from a backup tool perspective, say if you want to
integrate it into a network backup solution, that might make things harder.
What's needed there is the ability to deliver the dump over a single pipe,
or over several. If you have to dump to disk first and can only pick the
files up later, that requires a lot more I/O and disk space.

But I'm not sure that's a concern we need to think about in this case,
just wanted to mention it.

> BTW, parallel dumping might be important, but is really much less so
> than parallel restoring in my book.

By far. The only case where you'd want a backup to max out your system is
during an "offline upgrade"... You don't want a regular backup to max
things out, because it slows everything else down. Whereas if you're doing
a restore, you most likely want your data back ASAP.
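
As a sketch, a parallel restore could look like this, assuming a jobs-style
flag along the lines being discussed (hypothetical here):

    # restore using four parallel worker processes
    pg_restore -j 4 -d mydb /backup/mydb.dump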

//Magnus
