
Re: pg_dump 2GB limit?

From: Jan Wieck <janwieck(at)yahoo(dot)com>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: Doug McNaught <doug(at)wireboard(dot)com>,Laurette Cisneros <laurette(at)nextbus(dot)com>,pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_dump 2GB limit?
Date: 2002-03-29 19:02:43
Lists: pgsql-hackers
Christopher Kings-Lynne wrote:
> > > File size limit exceeded (core dumped)
> > >
> > > We suspect pg_dump.  Is this true?  Why would there be this limit in
> > > pg_dump?  Is it scheduled to be fixed?
> Try piping the output of pg_dump through bzip2 before writing it to disk.
> Or else, I think pg_dump has a -z parameter or something for turning
> on compression.

    And if that isn't enough, you can pipe the output (compressed
    or not) into split(1).
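    A sketch of what that pipeline could look like (the database
    name "mydb" and the 1 GB chunk size are illustrative, not from
    the thread; the second half demonstrates the same
    split/reassemble mechanic on dummy data so it can be run
    without a database):

    ```shell
    # Hypothetical usage: dump, compress, and split into 1 GB
    # chunks, each safely under the 2 GB file-size limit:
    #   pg_dump mydb | gzip | split -b 1000m - mydb.dump.gz.
    # To restore, reassemble the pieces in order and pipe back in:
    #   cat mydb.dump.gz.* | gunzip | psql mydb

    # The same mechanic on generated data, so the round trip can
    # be verified without PostgreSQL installed:
    seq 1 100000 > original.txt
    gzip -c original.txt | split -b 64k - piece.
    cat piece.* | gunzip > restored.txt
    cmp original.txt restored.txt && echo "round-trip OK"
    ```

    split(1) appends lexically ordered suffixes (piece.aa,
    piece.ab, ...), so a plain `cat piece.*` reassembles the
    chunks in the correct order.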



# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck(at)Yahoo(dot)com #
