pg_dump corrupts database?

From: Stephen Robert Norris <srn(at)commsecure(dot)com(dot)au>
To: PgSQL General ML <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump corrupts database?
Date: 2003-08-06 01:18:53
Message-ID: 1060132732.19387.3.camel@ws12.commsecure.com.au
Lists: pgsql-general

I've encountered this a few times with 7.2 and 7.3.

If I run pg_dump on a large database (> 100 MB; the bigger it is, the more
likely), and the dump gets interrupted for some reason (e.g. the target disk
fills up), the source database becomes corrupt. I start getting errors
like:

open of /var/lib/pgsql/data/pg_clog/0323 failed: No such file or
directory

and I have to drop/restore the table in question.

Is this a known problem? Is there some safe way to dump databases that
avoids it?
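As a workaround while waiting for an answer, one way to avoid the
disk-full interruption in the first place is to pre-check free space on the
target filesystem before dumping, and to remove any partial dump if pg_dump
exits non-zero. This is only a sketch; the database name, paths, and the
size estimate (assuming the dump needs roughly the on-disk size of the data
directory) are assumptions, not from the original post.

```shell
#!/bin/sh
# Sketch: refuse to dump unless the target filesystem appears to have
# enough room, and clean up a partial dump on failure.
# DB name and paths below are hypothetical examples.

# have_space NEEDED_KB AVAIL_KB -> exit 0 if AVAIL_KB >= NEEDED_KB
have_space() {
    [ "$2" -ge "$1" ]
}

DB=mydb                        # hypothetical database name
OUT=/backup/mydb.dump          # hypothetical target path

# Rough estimate: assume the dump needs about the on-disk size of the
# data directory (a text dump is often smaller, so this is conservative).
needed_kb=$(du -sk /var/lib/pgsql/data | awk '{print $1}')
avail_kb=$(df -kP /backup | awk 'NR==2 {print $4}')

if have_space "$needed_kb" "$avail_kb"; then
    pg_dump "$DB" > "$OUT" || {
        echo "pg_dump failed; removing partial dump" >&2
        rm -f "$OUT"
    }
else
    echo "not enough space on target ($avail_kb KB free, ~$needed_kb KB needed)" >&2
fi
```

Piping the dump through gzip, or dumping to a filesystem with plenty of
headroom, reduces the chance of hitting the disk-full case at all.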

Stephen
--
Stephen Robert Norris <srn(at)commsecure(dot)com(dot)au>
CommSecure Australia Pty Ltd
