Re: pg_dump 2GB limit?

From: Jan Wieck <janwieck(at)yahoo(dot)com>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: Doug McNaught <doug(at)wireboard(dot)com>, Laurette Cisneros <laurette(at)nextbus(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_dump 2GB limit?
Date: 2002-03-29 19:02:43
Message-ID: 200203291902.g2TJ2hQ29153@saturn.janwieck.net
Lists: pgsql-hackers

Christopher Kings-Lynne wrote:
> > > File size limit exceeded (core dumped)
> > >
> > > We suspect pg_dump. Is this true? Why would there be this limit in
> > > pg_dump? Is it scheduled to be fixed?
>
> Try piping the output of pg_dump through bzip2 before writing it to disk.
> Alternatively, I think pg_dump itself has a -z (or similar) option for
> turning on compression.
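A minimal sketch of the pipe approach. The pg_dump invocations are shown as comments because they need a live database; `mydb` is a placeholder name, and the exact spelling of pg_dump's compression flag depends on your version (check pg_dump --help). The runnable part uses a small stand-in file so the compress/decompress round trip can be verified:

```shell
# Real usage (assumes a database named mydb):
#   pg_dump mydb | bzip2 > mydb.sql.bz2        # never writes an uncompressed file
#   bunzip2 -c mydb.sql.bz2 | psql mydb        # restore path
#
# Stand-in for pg_dump's output, so the pipeline itself is checkable:
printf 'CREATE TABLE t (i int);\n' > /tmp/fake_dump.sql
bzip2 -c /tmp/fake_dump.sql > /tmp/fake_dump.sql.bz2
# Decompress to stdout and compare byte-for-byte with the original:
bunzip2 -c /tmp/fake_dump.sql.bz2 | cmp - /tmp/fake_dump.sql && echo "round-trip ok"
```

Because the dump never touches disk uncompressed, the 2GB file-size limit only matters for the (much smaller) compressed output.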

And if that isn't enough, you can pipe the output (compressed
or not) into split(1).
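A sketch of the split(1) trick. The real pipeline would be something like `pg_dump mydb | bzip2 | split -b 1024m - dump.bz2.` (placeholder names); here a generated file stands in for the dump so the split-and-reassemble round trip can actually be run and checked:

```shell
# Generate a 64KB stand-in for the dump stream:
dd if=/dev/urandom of=/tmp/dump.bin bs=1024 count=64 2>/dev/null
# Split into 16KB pieces: /tmp/dump.part.aa, .ab, .ac, .ad
split -b 16k /tmp/dump.bin /tmp/dump.part.
# The shell glob sorts the suffixes, so cat restores the original order:
cat /tmp/dump.part.* > /tmp/dump.reassembled
cmp /tmp/dump.bin /tmp/dump.reassembled && echo "pieces reassemble ok"
```

Each piece stays under the filesystem's 2GB limit, and `cat dump.bz2.* | bunzip2 | psql mydb` (same placeholder names) would put the stream back together at restore time.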

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #

