Re: pg_dump's over 2GB

From: Jeff Hoffmann <jeff(at)propertykey(dot)com>
To: Bryan White <bryan(at)arcamax(dot)com>, pgsql-general(at)postgreSQL(dot)org
Subject: Re: pg_dump's over 2GB
Date: 2000-09-29 16:41:51
Message-ID: 39D4C64F.378F09BB@propertykey.com
Lists: pgsql-general

Bryan White wrote:
>
> I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.
>

Sure, I do it all the time. Unfortunately, I've had a few cases where even
the gzipped dump goes over 2GB, which is a real PITA since I then have to
dump some tables individually. Generally, I do something like

pg_dump database | gzip > database.pgz

to dump the database and

gzip -dc database.pgz | psql database

to restore it. I've always thought compression should be an option for
pg_dump, but it's really not that much more work to just pipe the output
and input through gzip.
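
(Not something mentioned above, but one possible workaround for the case
where even the compressed dump exceeds 2GB is to break the stream into
chunks with the standard split tool instead of dumping tables one by one.
A minimal sketch, assuming GNU split and roughly 1GB pieces; the file
names and chunk size are just examples:

# dump: compress and split into 1000MB chunks named database.pgz.aa, .ab, ...
pg_dump database | gzip | split -b 1000m - database.pgz.

# restore: concatenate the chunks in order, decompress, and feed to psql
cat database.pgz.* | gzip -dc | psql database

Since split writes its suffixes in sorted order, a plain cat of the glob
reassembles the stream correctly.)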

--

Jeff Hoffmann
PropertyKey.com
