I will try this. Is there a particular size these files need to be?
Then I can try pg_dumpall. That is, of course, my main concern: I
need to get the data out.
By the way, before I went into panic mode, I tried pg_dumpall and
reindexing the database. Everything fails with the same error, though
sometimes with a different file name.
Quoting Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
> On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai <tsakai(at)gallo(dot)ucsf(dot)edu> wrote:
>> Hi Carol,
>> I detect in you some apprehension that pg_dumpall
>> won't run or complete. Why is that? Have you already
>> run it and it didn't work? If that's not the case,
>> why not run pg_dumpall at a quiet hour and see?
>> I think Scott is right about installing the latest
>> 8.2 on top. It won't be a time-consuming task.
>> Why not give it a whirl? It would be good to
>> find out one way or the other.
>> Scott: Are files 0000 through 002F (which are
>> not there) absolutely necessary for recovering data?
> Most likely not. If the db won't start up without them, it might be
> possible to create new clog files that are nothing but zeroes. Never
> been in this position though...
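
The zero-filled-clog trick Scott mentions can be sketched as below. In
8.2 with the default 8 kB block size, each pg_clog segment is 32 pages
of 8 kB, i.e. 256 kB (262144 bytes), and a segment of all zero bytes
reads as "transaction in progress" for every transaction it covers.
This is a sketch only: the stub directory name is an example, the
server must be stopped before copying stubs into $PGDATA/pg_clog, and
stubs should only replace segments that are actually missing.

```shell
# Create zero-filled stand-ins for missing clog segments 0000..002F.
# 8.2 default segment size: 32 pages x 8 kB = 262144 bytes.
# Build them in a scratch directory first; copy into $PGDATA/pg_clog
# (server stopped) only for files that do not already exist.
mkdir -p clog_stubs
for n in $(seq 0 47); do                 # 47 decimal = 2F hex
    f=$(printf '%04X' "$n")              # clog segment names are hex
    dd if=/dev/zero of="clog_stubs/$f" bs=8192 count=32 2>/dev/null
done
```

Since the zero stubs mark those old transactions as in-progress rather
than committed, some rows may appear missing until the data is dumped
and inspected, so treat any pg_dumpall taken this way as a
best-effort rescue copy.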