Re: Missing pg_clog files

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Carol Walter <walterc(at)indiana(dot)edu>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 13:10:21
Message-ID: 29313.1222261821@sss.pgh.pa.us
Lists: pgsql-admin

Carol Walter <walterc(at)indiana(dot)edu> writes:
> I tried creating the files 0000 through 002F. Pg_dump still will not
> run. The error was as follows:

> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: could not access status
> of transaction 20080015
> DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
> Error 0.
> pg_dump: The command was: COPY ebizd.products_categories
> (category_id, product_id) TO stdout;

You need to make the files the right size (256K of zeroes).
A suitable "dd" from /dev/zero will accomplish this on modern
Unixen (i.e., anything that has /dev/zero).
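
Something along these lines should do it (adjust to your actual
data directory; repeat for each missing segment -- 0013 is the one
named in the error above):

    cd $PGDATA/pg_clog
    # 32 blocks of 8K = 256K of zero bytes
    dd if=/dev/zero of=0013 bs=8192 count=32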

Note that this is by no means a fix, it simply allows pg_dump to
complete. What you are really doing by filling those files with
zeroes is saying "assume all these old transactions aborted".
You *will* have data loss. It will only affect rows that haven't
been accessed in a very long time (since at least June, looks like)
but gone is gone.

Another possibility that might be better is to fill the files with
0x55, though this is harder since /dev/zero won't help. That would
force all the old transactions to be considered committed rather than
aborted. This isn't really better from a consistency standpoint, but
if you feel that most of your data-altering commands succeed then
this might give you a closer approximation to the state you want.
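
If you want to try that, something like this might work (untested
sketch; 0x55 is octal 125, and your tr must cope with NUL bytes on
input):

    # produce a 256K segment filled with 0x55 instead of zeroes
    dd if=/dev/zero bs=8192 count=32 | tr '\0' '\125' > pg_clog/0013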

The whole thing is pretty troubling because 8.2.x is supposed to
contain defenses against this type of problem. Could we see
the contents of "select datname, datfrozenxid from pg_database"?
Also, have you tried dumping individual databases instead of
pg_dumpall? (It would be good to experiment with that before
you start making bogus pg_clog files; once you do that there's
no going back in terms of recovering the true state of your data.)
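
For example (the second database name below is just illustrative):

    psql -U postgres -c "select datname, datfrozenxid from pg_database"
    # try each database separately rather than pg_dumpall
    pg_dump -U postgres ebiz > ebiz_bk.sql
    pg_dump -U postgres someotherdb > someotherdb_bk.sql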

regards, tom lane

PS: Since you've evidently got a lot of rows that haven't been
accessed in months, I conclude that you have not been running
routine backups. Tut tut. I trust you'll remedy that oversight
as soon as you get out of the immediate problem.
