Re: dealing with file size when archiving databases

From: Tino Wildenhain <tino(at)wildenhain(dot)de>
To: "Andrew L(dot) Gould" <algould(at)datawok(dot)com>
Cc: Postgresql-General <pgsql-general(at)postgresql(dot)org>
Subject: Re: dealing with file size when archiving databases
Date: 2005-06-21 06:06:42
Message-ID: 1119334002.1183.125.camel@Andrea.peacock.de
Lists: pgsql-general

On Monday, 2005-06-20 at 21:28 -0500, Andrew L. Gould wrote:
> I've been backing up my databases by piping pg_dump into gzip and
> burning the resulting files to a DVD-R. Unfortunately, FreeBSD has
> problems dealing with very large files (>1GB?) on DVD media. One of my
> compressed database backups is greater than 1GB; and the result of a
> gzipped pg_dumpall is approximately 3.5GB. The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.
>
> My proposed solution is to modify my python script to:
>
> 1. use pg_dump to dump each database's tables individually, including
> both the database and table name in the file name;
> 2. use 'pg_dumpall -g' to dump the global information; and
> 3. burn the backup directories, files, and a recovery script to DVD-R.
>
> The script will pipe pg_dump into gzip to compress the files.

I'd use pg_dump -Fc instead. It is compressed and you get more
restore options for free (selective restore, for example).
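
Roughly like this in the python script (untested sketch; database
names and paths are made up, adjust to your setup):

import subprocess

backup_dir = "/backups"                 # example path
databases = ["mydb1", "mydb2"]          # fill in from your script

# one compressed custom-format dump per database
for db in databases:
    outfile = "%s/%s.dump" % (backup_dir, db)
    subprocess.check_call(["pg_dump", "-Fc", "-f", outfile, db])

# roles, tablespaces etc. still need pg_dumpall -g
globals_file = open("%s/globals.sql" % backup_dir, "w")
subprocess.check_call(["pg_dumpall", "-g"], stdout=globals_file)
globals_file.close()

A single table can later be pulled out with something like
pg_restore -d mydb1 -t sometable /backups/mydb1.dump, so there is
no need to dump every table into its own file.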

> My questions are:
>
> 1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump? Will I
> be missing anything?
> 2. Does anyone foresee any problems with the solution above?

Yes, the files might still be too big to fit on one DVD at a time.
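
If a single dump still ends up bigger than the ~1GB the DVD
filesystem copes with, you could cut it into pieces below that
limit and cat them back together before restoring. A rough,
untested sketch (path and sizes are just examples):

import os

CHUNK = 900 * 1024 * 1024   # bytes per piece, keeps each file < 1GB
BLOCK = 8 * 1024 * 1024     # copy in 8 MB blocks to keep memory low

src = open("/backups/mydb.dump", "rb")
part = 0
while True:
    name = "/backups/mydb.dump.part%03d" % part
    out = open(name, "wb")
    written = 0
    while written < CHUNK:
        data = src.read(min(BLOCK, CHUNK - written))
        if not data:
            break
        out.write(data)
        written += len(data)
    out.close()
    if written == 0:
        os.remove(name)     # nothing left, drop the empty piece
        break
    part += 1
src.close()

split(1) and cat do the same job on the command line if you'd
rather not put this in the python script.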
