From: Alex Krohn <alex(at)gossamer-threads(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: unable to repair table: missing chunk number
Date: 2002-04-19 22:12:56
Message-ID: 20020419151252.3C03.ALEX@gossamer-threads.com
Lists: pgsql-general
Hi Tom,
> A brute-force way to narrow things down would be to write a little
> program that tries to retrieve each row individually by primary key,
> starting at 115848 since you know the rows before that are okay.
Thanks, this worked. I ran a Perl script that looped from 1 to
max(primary_id), selected each record individually, and inserted it into
a new table.
There were a total of two bad records, so not too bad. =)
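For anyone else hitting this, the salvage loop can be sketched like so. This is a minimal Python sketch (the original was Perl); the table and column names (`users`, `primary_id`) come from this thread, and `fetch_row`/`copy_row` stand in for real database calls (e.g. via DBI or psycopg2):

```python
# Sketch of the row-by-row salvage loop: try each primary key individually
# so a "missing chunk number" error only loses that one row, not the whole
# table scan. fetch_row/copy_row are placeholders for real DB calls.

def salvage_rows(fetch_row, copy_row, start_id, max_id):
    """Copy each row that can still be read; return the ids that fail."""
    bad_ids = []
    for pk in range(start_id, max_id + 1):
        try:
            row = fetch_row(pk)   # SELECT * FROM users WHERE primary_id = pk
        except Exception:
            bad_ids.append(pk)    # TOAST error ("missing chunk number") lands here
            continue
        if row is not None:
            copy_row(row)         # INSERT INTO users_new VALUES (...)
    return bad_ids
```

The point of going one row at a time is isolation: a sequential scan aborts on the first corrupt TOAST pointer, while per-key lookups let every intact row through.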
> That's disturbing; short of a serious failure (disk crash, for instance)
> I don't know of anything that would cause this.
>
> One thing that would be interesting to try is to investigate the TOAST
> table directly.
# select oid from pg_class where relname = 'users';
oid
---------
9361620
(1 row)
# select chunk_seq, length(chunk_data) from pg_toast_9361620 where
chunk_id = 12851102 order by chunk_seq;
chunk_seq | length
-----------+--------
(0 rows)
Very strange.
Now that we can back up the data, we've moved the database to a brand
new disk drive, and re-imported and vacuumed everything. The application
is running smoothly again.
I doubt this is relevant, but we were symlinking /usr/local/pgsql/data
-> /mnt/disk2/pgsql. Also, one column in the problem table was a text
field averaging 20 KB.
I still have the old database if it helps.
Thanks for all your help,
Alex