restore of large databases failing--any ideas?

From: s_hawkins(at)mindspring(dot)com (S(dot) Hawkins)
To: pgsql-hackers(at)postgresql(dot)org
Subject: restore of large databases failing--any ideas?
Date: 2004-04-07 19:12:45
Message-ID: a7fac81d.0404071112.2d193673@posting.google.com
Lists: pgsql-hackers

Hi all,

We're using pg_dump to back up our databases. The dump itself appears
to work fine. On smaller (< approx. 100 MB) data sets, the restore also
works, but on larger data sets the restore process consistently fails.

Other facts that may be of interest:

* We're running Postgres 7.2.3 on a more-or-less stock Red Hat 7.3
platform.
* Backup is done with "pg_dump -c -U postgres", then gzip
* Restore is via "cat <archive_file> | gunzip | psql" (both pipelines
are spelled out in full below)
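
For completeness, here is roughly what those two steps look like end to
end (dbname and archive_file are stand-ins for the real names):

# backup: plain SQL dump with clean/DROP commands, then compressed
pg_dump -c -U postgres dbname > archive_file
gzip archive_file                  # produces archive_file.gz

# restore: decompress and feed the SQL straight back to psql
cat archive_file.gz | gunzip | psql dbname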

The particular file I'm wrestling with at the moment is ~2.2 GB
unzipped. If I try to restore it using pg_restore, the process
immediately fails with the following:

pg_restore: [archiver] could not open input file: File too large

When the data file is gzip'd, you can at least get the restore process
started with the following:

cat archive_file.gz | gunzip | psql dbname

The above command line starts OK, but eventually fails with:

server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
connection to server was lost
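
(If it would help to narrow this down, a run along the following lines
should at least show the last statement psql sent before the
disconnect; the archive and log names are just placeholders:)

# re-run the restore with each statement echoed (-e) and capture all output;
# the end of restore.log should show the last statement sent before the crash
cat archive_file.gz | gunzip | psql -e dbname > restore.log 2>&1
tail -n 50 restore.log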
