Today I had two corrupted tables after nearly two months without any problems.
I excluded the damaged records (one in each table) and rebuilt both tables.
One of the tables is used as a kind of "first in, first out" queue: it stores
continuous data for a time slot of about one day, which is about 80,000
records, so the content of the table changes daily.
Another table queues data for three months, which is about 2,000,000 records.
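(A rotation like this can be implemented as a timestamp-based DELETE run from cron; a minimal sketch, where "mydb", "daily_queue", and "logged_at" are assumed placeholder names, not the real ones:)

```shell
# hypothetical crontab entry for the daily expiry -- database "mydb",
# table "daily_queue" and column "logged_at" are assumed names
15 0 * * *  psql -d mydb -c "DELETE FROM daily_queue WHERE logged_at < now() - interval '1 day'"
```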
Currently I run a VACUUM at night to keep the database clean and access times
usable. I don't have a clue where the corruption came from.
Is it mandatory for table integrity to run ANALYZE periodically?
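(The nightly vacuum itself is a one-line cron job; a minimal sketch, assuming the database is called "mydb":)

```shell
# hypothetical crontab entry -- vacuumdb's -z/--analyze flag refreshes
# planner statistics in the same pass; "mydb" is an assumed name
30 3 * * *  vacuumdb --analyze --dbname mydb
```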
The system is an AMD Athlon with 1 GB of memory running Red Hat Linux 7.1.
The PostgreSQL version is 7.2-1PGDG.
Excerpt from postgresql.conf:
max_fsm_relations = 100
max_fsm_pages = 2000
sort_mem = 128
vacuum_mem = 8192
Perl v5.6.1 with DBI version 1.18
Any ideas on how to track down the cause of corruption like this?
Are there any known problems with this PostgreSQL version?
List: pgsql-novice