From: Jeff Boes <jboes(at)nexcerpt(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Errors while vacuuming large tables
Date: 2002-10-14 15:13:46
Message-ID: aoemt6$1c9u$1@news.hub.org
Lists: pgsql-admin

We expire rows from a few fairly large tables in our schema (running
PostgreSQL 7.2.1) based on a datestamp:

Table A: 140 Krows, 600 MB
Table B: 100 Krows, 2.7 GB
Table C: 140 Krows, 2.7 GB
Table D: 3.2 Mrows, 500 MB

so that something like 15-20% of each table is deleted at a crack (on a
weekend, of course). After the deletions, a VACUUM FULL is performed on
each of these tables. Recently, we have been getting this message quite
often on table A:

ERROR: Parent tuple was not found

which, from what I've read here and elsewhere, I'm led to believe is
caused by a bug in PostgreSQL involving rows marked as read-locked. I
hope this gets fixed soon, because it's annoying not to be able to
reclaim the space on this table automatically.
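
The weekend routine is essentially the following; the table and column
names here are placeholders for our real schema, and so is the cutoff:

    -- Expire old rows by datestamp ("expired_at" and the 90-day
    -- cutoff are placeholders):
    DELETE FROM table_a WHERE expired_at < now() - interval '90 days';

    -- Reclaim the space; this is the step that now fails:
    VACUUM FULL table_a;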

But this weekend, we got a different set of errors:

ERROR: cannot open segment 1 of relation table_D (target
block 2337538109): No such file or directory

and for table B:

NOTICE: Child itemid in update-chain marked as unused - can't continue
repair_frag
ERROR: cannot open segment 3 of relation pg_toast_51207070 (target
block 2336096317): No such file or directory
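
Incidentally, both "target block" numbers look implausibly large to me.
A quick sanity check, assuming the default 8 kB block size and 1 GB
segment files:

    -- Blocks per 1 GB segment at the default 8 kB block size:
    SELECT 1073741824 / 8192;        -- 131072
    -- Byte offset implied by the target block in table B's error:
    SELECT 2336096317::int8 * 8192;  -- ~1.9e13 bytes, i.e. about 17 TB

A 2.7 GB table (even counting its TOAST data) is nowhere near 17 TB, so
the block pointer itself looks corrupted, not just a missing file.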

What's the remedy to keep this from happening? We have an Apache
mod_perl installation running queries against these tables; could an open
read-only transaction cause problems like this?
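
I do know that an open transaction at least pins deleted rows so that
VACUUM has to keep them around; a minimal two-session illustration
(names hypothetical, as above):

    -- Session 1 (e.g. an idle mod_perl backend):
    BEGIN;
    SELECT count(*) FROM table_a;
    -- ...left open, never committed or rolled back...

    -- Session 2:
    DELETE FROM table_a WHERE expired_at < now() - interval '90 days';
    VACUUM VERBOSE table_a;
    -- While session 1's transaction stays open, VACUUM must keep the
    -- deleted row versions instead of reclaiming them.

Whether that can escalate into errors like the ones above is exactly
what I can't tell.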

--
Jeff Boes vox 616.226.9550 ext 24
Database Engineer fax 616.349.9076
Nexcerpt, Inc. http://www.nexcerpt.com
...Nexcerpt... Extend your Expertise
