Re: pg_dump's over 2GB

From: "Steve Wolfe" <steve(at)iboats(dot)com>
To: "pgsql-general" <pgsql-general(at)postgreSQL(dot)org>
Subject: Re: pg_dump's over 2GB
Date: 2000-09-29 16:34:01
Message-ID: 004101c02a33$1913ca80$50824e40@iboats.com
Lists: pgsql-general

> My current backups made with pg_dump are currently 1.3GB. I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the
> output exceeds 2GB?

There are ways around the 2 gig limit if the program was built with
large-file support, though I'm not sure whether that helps when the output
goes through a shell redirect...
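
As a rough check first (just a sketch of mine, assuming GNU dd and a
couple of gigs of free disk), you can see whether the system will even
let you write a file past the 2 gig mark:

  dd if=/dev/zero of=/tmp/bigfile.test bs=1024k count=2100
  ls -l /tmp/bigfile.test
  rm /tmp/bigfile.test

If dd dies with "File too large" around 2 gigs, a plain shell redirect to
a file is going to hit the same wall.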

> Currently the dump file is later fed to a 'tar cvfz'. I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.

Why not just pump the data right into gzip? Something like:

pg_dumpall | gzip --stdout > pgdump.gz

(I'm sure that the more efficient shell scripters will know a better way)
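
For instance (just a sketch, with a made-up backup path, assuming the
user running it can connect to all the databases), a cron-friendly
version that stamps each dump with the date:

  #!/bin/sh
  # nightly compressed dump, one file per day
  pg_dumpall | gzip -c > /backups/pgdump-`date +%Y%m%d`.gz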

If your data is anything like ours, you will get at least a 5:1
compression ratio, meaning you can actually dump around 10 gigs of data
before you hit the 2 gig file limit.
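
If you want to know ahead of time how close you are, you can count the
compressed bytes without writing a file at all (again, just a sketch):

  pg_dumpall | gzip -c | wc -c

If that number is anywhere near 2147483647, it's time to worry.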

steve
