Re: Missing pg_clog files

From: Carol Walter <walterc(at)indiana(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu>, pgsql-admin(at)postgresql(dot)org
Subject: Re: Missing pg_clog files
Date: 2008-09-24 13:41:28
Message-ID: F6EE2346-B706-43CC-947D-B3E27F34B7E4@indiana.edu
Lists: pgsql-admin

Yes, as an experienced DBA, there is no excuse for not having routine
backups. I do them for the other instances, but not this one. It
contains student databases, and the students are required to do their
own backups. The problem is that the database got used for some
non-student data. I should have anticipated that since the resource
is there, someone would use it. =) There are three small databases
that I need.

I'm going to create the files again, filled with "0x55". Do I need
256K of those bytes in each file?
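
In case it helps to be concrete, here is a sketch of what I'm
planning (assuming 256 kB segments, and using "0013" from the error
message as the example file; I'd repeat this for each missing
segment):

  dd if=/dev/zero bs=1024 count=256 | tr '\0' '\125' > pg_clog/0013

Since /dev/zero only produces zero bytes, the tr step rewrites each
of them as octal 125, i.e. 0x55.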

I did try to dump individual databases. Since I need these three,
that was the first thing I did.

If I upgrade to the latest 8.2 minor release, do I still have to do
the dump and restore? That may be a dumb question, but I've seen some
emails that seem to say that I don't.

Here are the results of the query you asked for.
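
For reference, this is roughly how I ran it (a sketch, assuming psql
can connect as the postgres superuser):

  psql -U postgres -c "select datname, datfrozenxid from pg_database"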

datname | datfrozenxid
------------------+--------------
postgres | 524
aapike | 524
acmweb | 524
aeprice | 524
sgadkari | 524
template0 | 524
ahanders | 524
ajkurtz | 524
akeebaug | 524
aloehrle | 524
amfalcon | 524
amihnen | 524
amlmonro | 524
andmcilw | 524
arhoda | 524
arsteven | 524
asist | 524
askschoo | 524
atawfik | 524
awead | 524
barb | 524
benpeck | 524
bepnelso | 524
berry3 | 524
bflesher | 524
biblio | 524
bjc2 | 524
blbeals | 524
blfay | 524
brichwin | 524
brog | 524
burtont | 524
cartermt | 524
cdwhitlo | 524
cgoodbee | 524
chbishop | 524
clschwie | 524
cmdablog | 524
cmfriend | 524
cwestbro | 524
daltenho | 524
datnguye | 524
davisjs | 524
dlafemin | 524
dlgriggs | 524
dotsonm | 524
dpierz | 524
dsa | 524
dtdo | 524
wke | 524
l548s07c | 524
jm | 524
dbicknel | 524
dwray | 524
eaodonne | 524
jeejacks | 524
edewert | 524
eeich | 524
efhardy | 524
ellwrigh | 524
emerya | 524
emlcoope | 524
emudave | 524
eschramm | 524
bkjacob | 524
jkulneva | 524
kuepeter | 524
ys3 | 524
cepynes | 524
flashb | 524
fullera | 524
gabwong | 524
hbusch | 524
hcapocci | 524
hiteaw | 524
hjtolber | 524
ingschne | 524
iplanton | 524
jajcdb | 524
jfieber | 524
jiwan | 524
jku | 524
josreyes | 524
jowarren | 524
jplong | 524
jschuenz | 524
jtweedy | 524
kacates | 524
karpaden | 524
kbivcsi | 524
kcentann | 524
kcfreder | 524
kcostin | 524
hrosenba | 524
stjmarsh | 524
rvarick | 524
prasadm | 524
kdlib | 524
khenrich | 524
kiyang | 524
kmane | 524
kmauer | 524
knbayles | 524
knoubani | 524
kseki | 524
l546f06a | 524
l548s06a | 524
lair_medinfer | 524
lbikoff | 524
lee55 | 524
leemchri | 524
jacksonj | 524
ageorges | 524
austroud | 524
bmoriari | 524
broos | 524
ceich | 524
edawidow | 524
ljlangnet | 524
ljohnsto | 524
lkaiser2 | 524
lkhooper | 524
lmolefi | 524
ltian | 524
lucas_dictionary | 524
lucas_genedb | 524
lucas_proteindb | 524
macci | 524
magpeter | 524
epoirier | 524
hnethert | 524
jgaley | 524
jtwelty | 524
jwalrath | 524
mamablogs | 524
mapfinder | 524
markane | 524
mcglass | 524
meho | 524
mfr | 524
mmsommer | 524
mnapier | 524
moore35 | 524
morrisjm | 524
mosse | 524
msohl | 524
mtl554 | 524
nachase | 524
ngarrett | 524
nirobins | 524
nlgeorge | 524
nsfitwf | 524
jwoomer | 524
kekbia | 524
koulikom | 524
ksd | 524
lsisler | 524
mwourms | 524
nucleus | 524
omthomas | 524
naalsham | 524
nansuwan | 524
nfcapps | 524
nwahrman | 524
oescue | 524
plpierso | 524
ppatil | 524
psbright | 524
oncosifter | 524
otdelong | 524
paolillo | 524
penwang | 524
perezh | 524
phppgadmin | 524
places | 524
pldillon | 524
prodes | 524
pwelsch | 524
qadrupal | 524
rduhon | 524
rdwillis | 524
repotter | 524
rgao | 524
rkcsi | 524
rklusman | 524
rmukkama | 524
rosea | 524
rosenbsj | 524
rpherwan | 524
rtolnay | 524
sagoodwi | 524
sakram | 524
sambre | 524
scott6 | 524
sestumpf | 524
sghurd | 524
shawd | 524
sjt | 524
sjunk | 524
skashwan | 524
skonkiel | 524
slisprot | 524
slsingle | 524
slspangl | 524
smercure | 524
sp23 | 524
spencers | 524
sprao | 524
spraocal | 524
spraoit | 524
stritt | 524
switzers | 524
tbjacobs | 524
rbrubach | 524
saaalshe | 524
template1 | 524
tigan | 524
tlcamero | 524
tlennis | 524
tlmiles | 524
tneirync | 524
trec | 524
tvdwyer | 524
upriss | 524
l548s07b | 524
videob | 524
vkluehrs | 524
wemigh | 524
wsams | 524
xyao | 524
yasun | 524
yufu | 524
yuwang2 | 524
yz12 | 524
rdurrer | 524
rbain | 524
jgottwig | 524
gallantm | 524
ajwei | 524
rpvander | 524
l548s07a | 524
sbluemle | 524
sstrahl | 524
stevecox | 524
vcsingh | 524
huangb | 524
mpraskav | 524
lvanleer | 524
mmillard | 524
linshedd | 524
mgunkel | 524
aeathava | 524
rbiars | 524
krblackw | 524
boltonb | 524
jcornn | 524
cdethlof | 524
reells | 524
lorhardi | 524
thommey | 524
ckhull | 524
bjules | 524
lklake | 524
rootk | 524
whmcmill | 524
eoverhau | 524
mrome | 524
as37 | 524
krlthoma | 524
jltyner | 524
mavest | 524
lcwelhan | 524
awismer | 524
confluence | 524
jawalsh | 524
hshewale | 524
polavara | 524
s517f07a | 524
ebiz | 524
lalfi | 524
vcob | 524
s602s07f | 524
yangfund | 524
tdbowman | 524
ofabilol | 524
s517s08a | 524
slis_assets | 524
clhoneyc | 524
bzflag | 524
caroltest | 524
citesrch | 524
vgangal | 524
skhowaji | 524
ofeda | 524
jatterbu | 524
s517s08b | 524
emakki | 524
test | 524
dingying | 524
walterc | 524
msinghi | 524
(301 rows)

Thank you for all your help.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc(at)indiana(dot)edu> writes:
>> I tried creating the files 0000 through 002F. Pg_dump still will not
>> run. The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR: could not access status
>> of transaction 20080015
>> DETAIL: Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete. What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss. It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help. That would
> force all the old transactions to be considered committed rather than
> aborted. This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem. Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall? (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
>
> regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups. Tut tut. I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.
