
Re: Missing pg_clog files

From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 14:31:05
Message-ID: FE8FD34C-4516-425A-B7AD-1917F134320F@indiana.edu
Lists: pgsql-admin

Hi, Tom,

Are the files that contain the hex characters supposed to hold a
single string, with no control characters?  I used Excel to create
the files, but I'll have to edit them once they're on the Solaris box
to take out any bad characters.
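
(Once the files are on the Solaris box, something like "od -c" will
expose any stray bytes Excel may have added; carriage returns and
newlines show up as \r and \n:

    od -c 0000 | head

The file name here is just one of the clog segments in question.)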

I'm also wondering if, after I create the dummy files and pg_dump
works, I could restore an old pg_dumpall file and then insert any
missing data from the newer pg_dumps.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc(at)indiana(dot)edu> writes:
>> I tried creating the files 0000 through 002F.  Pg_dump still will not
>> run.  The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR:  could not access status
>> of transaction 20080015
>> DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
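
(For reference, a dd invocation along these lines produces one 256K
all-zeroes file; the segment name is the one from the error above, and
$PGDATA standing for the cluster's data directory is an assumption:

    # 32 blocks x 8192 bytes = 262144 bytes = 256K
    dd if=/dev/zero of=$PGDATA/pg_clog/0013 bs=8192 count=32

The same command, with the name changed, covers each of the segments
0000 through 002F.)
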
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete.  What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss.  It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help.  That would
> force all the old transactions to be considered committed rather than
> aborted.  This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
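
(A sketch of one way to do the 0x55 fill: generate zeroes with dd and
translate them with tr; octal 125 is hex 0x55. The path is again an
assumption:

    dd if=/dev/zero bs=8192 count=32 | tr '\000' '\125' > $PGDATA/pg_clog/0013

Every byte of the resulting 256K file is then 0x55, i.e. all
transactions in that segment read as committed.)
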
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem.  Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall?  (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
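
(One way to run that check, assuming psql can connect as the postgres
superuser:

    psql -U postgres -d postgres -c "select datname, datfrozenxid from pg_database"

pg_database is a shared catalog, so it can be queried from any
database in the cluster.)
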
>
> 			regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups.  Tut tut.  I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.
>

