Re: pg_dump's over 2GB

From: "Ross J(dot) Reedstrom" <reedstrm(at)rice(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump's over 2GB
Date: 2000-09-29 16:57:11
Message-ID: 20000929115711.B5635@rice.edu
Lists: pgsql-general

On Fri, Sep 29, 2000 at 11:41:51AM -0500, Jeff Hoffmann wrote:
> Bryan White wrote:
> >
> > I am thinking that
> > instead I will need to pipe pg_dumps output into gzip thus avoiding the
> > creation of a file of that size.
>
> sure, i do it all the time. unfortunately, i've had it happen a few
> times where even gzipping a database dump goes over 2GB, which is a real
> PITA since i have to dump some tables individually. generally, i do
> something like
>
> pg_dump database | gzip > database.pgz

Hmm, how about:

pg_dump database | gzip | split -b 1024m - database_

That will give you 1GB files, named database_aa, database_ab, etc.

> to dump the database and
> gzip -dc database.pgz | psql database

cat database_* | gunzip | psql database
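For anyone wanting to sanity-check the split/gunzip round-trip before trusting it with a real dump, here is a minimal sketch. It substitutes seq-generated text for actual pg_dump output (a hypothetical stand-in, since no database is assumed), and uses small 64k chunks instead of 1024m so the split actually produces multiple pieces:

```shell
#!/bin/sh
# Sketch: verify that `gzip | split` followed by `cat | gunzip` reproduces
# the original data byte-for-byte. seq output stands in for pg_dump output.
set -e
tmp=$(mktemp -d)

seq 1 100000 > "$tmp/original.txt"                     # stand-in for pg_dump output
gzip -c "$tmp/original.txt" | split -b 64k - "$tmp/chunk_"   # compress and split
cat "$tmp"/chunk_* | gunzip > "$tmp/restored.txt"      # reassemble and decompress

# cmp exits nonzero if the files differ, so `set -e` would abort here on failure
cmp "$tmp/original.txt" "$tmp/restored.txt" && echo "round-trip OK"
rm -rf "$tmp"
```

The glob `chunk_*` expands in lexical order (aa, ab, ac, ...), which is exactly the order split wrote them in, so plain cat reassembles the stream correctly.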

Ross Reedstrom
--
Open source code is like a natural resource, it's the result of providing
food and sunshine to programmers, and then staying out of their way.
[...] [It] is not going away because it has utility for both the developers
and users independent of economic motivations. Jim Flynn, Sunnyvale, Calif.
