> Hmm. Assuming that it is a corrupted-data issue, the only likely
> failure spot that I see in CopyTo() is the heap_getattr macro.
> A plausible theory is that the length word of a variable-length field
> (eg, text column) has gotten corrupted, so that when the code tries to
> access the next field beyond that, it calculates a pointer off the end
> of memory.
> You will probably find that plain SELECT will die too if it tries to
> extract data from the corrupted tuple or tuples. With judicious use of
> SELECT last-column ... LIMIT you might be able to narrow down which
> tuples are bad, and then dump out the disk block containing them (use
> the 'tid' pseudo-attribute to see which block a tuple is in). I'm not
> sure if the exercise will lead to anything useful or not, but if you
> want to pursue it...
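For the record, the narrowing-down procedure described above might look like this (hypothetical table and column names; in current releases the block-identifying pseudo-column is spelled ctid, and it requires a live server to run, so this is only a sketch):

```sql
-- Binary-search for the first bad tuple by forcing extraction of the
-- last column over progressively narrower ranges:
SELECT last_col FROM mytab LIMIT 1000;          -- backend dies? bad tuple is in the first 1000
SELECT last_col FROM mytab LIMIT 500;           -- halve the range and repeat
-- Once narrowed down, show which disk block the suspect tuples live in:
SELECT ctid FROM mytab LIMIT 10 OFFSET 490;     -- ctid is (block number, item number)
```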
I am willing to spend some time to track this down. However, I would prefer
not to keep crashing my live database. I would like to copy the raw data
files to a backup machine. Are there any catches in doing this? This
particular table is only updated at predictable times on the live system, so I
am guessing that as long as it is stable for at least a few minutes before I copy
the file, it will work.
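As a crude check that the file really is quiescent during the copy, one could checksum it before and after a waiting window. This is a hypothetical sketch, not PostgreSQL-specific, and note a caveat: a stable on-disk file can still be missing changes sitting in shared buffers that have not been flushed yet.

```shell
#!/bin/sh
# copy_if_stable FILE DEST [INTERVAL]
# Copy FILE to DEST only if its checksum is unchanged across an
# INTERVAL-second window (default 60) -- a crude stability check.
copy_if_stable() {
    file=$1 dest=$2 interval=${3:-60}
    sum1=$(cksum "$file")
    sleep "$interval"
    sum2=$(cksum "$file")
    if [ "$sum1" = "$sum2" ]; then
        cp "$file" "$dest" && echo "copied"
    else
        echo "file changed during window; not copied" >&2
        return 1
    fi
}
```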
How hard would it be to write a utility that would walk a table looking for this
kind of corruption? Are the on-disk data formats documented anywhere?
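For what it's worth, here is a sketch of such a walker in Python, using the heap page layout from current PostgreSQL (documented in the "Database Page Layout" chapter and src/include/storage/bufpage.h). The header offsets below match recent releases and are assumptions for anything older; the block size is also assumed to be the default 8K. It only sanity-checks the page header and line pointers, which would catch a tuple pointer running off the page; checking the varlena length words inside tuples (the failure hypothesized above) would additionally need the table's tuple descriptor.

```python
import struct

PAGE_SIZE = 8192  # BLCKSZ; assumes the default build
# Page header per current bufpage.h: pd_lsn, pd_checksum, pd_flags,
# pd_lower, pd_upper, pd_special, pd_pagesize_version, pd_prune_xid.
# (Older releases used a different header -- adjust the format as needed.)
HEADER_FMT = "<QHHHHHHI"
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 24 bytes

LP_NORMAL = 1  # line pointer flag: points at a live tuple

def check_page(page, blkno=0):
    """Return a list of human-readable problems found on one heap page."""
    errors = []
    (_, _, _, lower, upper, special, _, _) = struct.unpack_from(HEADER_FMT, page)
    if not (HEADER_SIZE <= lower <= upper <= special <= len(page)):
        errors.append(f"block {blkno}: bad header bounds "
                      f"(lower={lower} upper={upper} special={special})")
        return errors  # line pointers are untrustworthy past this point
    nitems = (lower - HEADER_SIZE) // 4  # each ItemIdData is 4 bytes
    for i in range(nitems):
        (itemid,) = struct.unpack_from("<I", page, HEADER_SIZE + 4 * i)
        lp_off = itemid & 0x7FFF          # low 15 bits: tuple offset
        lp_flags = (itemid >> 15) & 0x3   # next 2 bits: state flags
        lp_len = (itemid >> 17) & 0x7FFF  # high 15 bits: tuple length
        if lp_flags == LP_NORMAL and not (upper <= lp_off and
                                          lp_off + lp_len <= special):
            errors.append(f"block {blkno}, item {i + 1}: tuple at offset "
                          f"{lp_off}, length {lp_len} is out of bounds")
    return errors

def check_file(path):
    """Walk every page of a heap file, yielding problem descriptions."""
    with open(path, "rb") as f:
        blkno = 0
        while True:
            page = f.read(PAGE_SIZE)
            if not page:
                break
            yield from check_page(page, blkno)
            blkno += 1
```

Run against a copied-out heap file (found under $PGDATA/base/), it reports any page whose header bounds or tuple pointers are inconsistent, without needing the server.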
pgsql-general mailing list