From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: condor(at)stz-bg(dot)com
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Dump large DB and restore it after all.
Date: 2011-07-05 10:08:21
Message-ID: 4E12E295.8050704@postnewspapers.com.au
Lists: pgsql-general

On 5/07/2011 5:00 PM, Condor wrote:
> Hello ppl,
> can I ask how to dump large DB ?

Same as a smaller database: with pg_dump. Why are you trying to split
your dumps into 1GB files? What does that gain you?

Are you using some kind of old file system and operating system that
cannot handle files bigger than 2GB? If so, I'd be pretty worried about
running a database server on it.
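
If your file system really does have a file size limit, you can still
avoid an intermediate uncompressed file. Something along these lines
works (database and file names are just examples):

  # Dump and compress in one pass
  pg_dump mydb | gzip > mydb.sql.gz

  # Cut the compressed dump into 1GB pieces, if you must
  split -b 1G mydb.sql.gz mydb.sql.gz.part-

  # Reassemble the pieces when it's time to restore
  cat mydb.sql.gz.part-* | gunzip | psql mydb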

As for gzip: gzip is almost perfectly safe. The only downside is that a
corrupted block in the file (due to a hard disk/DVD/memory/tape error or
whatever) makes the rest of the file, after the corrupted block,
unreadable. Since you shouldn't be storing your backups on anything that
might get corrupted blocks, that should not be a problem. If you are
worried about it, you're still better off using gzip together with an
error-correcting code tool like par2 to allow recovery from bad blocks.
The gzipped dump plus the par2 recovery files will be smaller than the
uncompressed dump, and will give you much better protection against
errors than an uncompressed dump would.
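
For example (file names are just placeholders, and I'm assuming the
par2cmdline tool), creating 10% redundancy and later checking and
repairing the dump looks something like this:

  # Create recovery data with 10% redundancy
  par2 create -r10 mydb.sql.gz.par2 mydb.sql.gz

  # Later: verify the dump, and repair it if blocks have gone bad
  par2 verify mydb.sql.gz.par2
  par2 repair mydb.sql.gz.par2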

To learn more about par2, go here:

http://parchive.sourceforge.net/

--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/
