We have a database containing three main tables (vbv, vbv_denorm and summary), plus some static tables with about 300 records.
Here is how it works:
1. One process receives data from our sensor network and inserts it into vbv.
2. A trigger on vbv adds the new records, together with some other
information, into vbv_denorm --> we use this for our reporting software.
3. A trigger on vbv_denorm aggregates the data in vbv_denorm and inserts
the result into summary.
4. Another process sends the summary records to another company on a
periodic basis. It sends records one by one (i.e. it opens a cursor, sends
one record, and after confirmation sends the next one).
This operation runs 24*7.
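For concreteness, here is a hedged sketch of what the trigger chain above might look like. The post does not show the schema, so the column names (ts, sensor_id, value), the hourly aggregation rule, and all function/trigger names are assumptions:

```sql
-- Hypothetical schema; column names are assumptions, not from the post.
CREATE TABLE vbv        (ts timestamptz, sensor_id integer, value numeric);
CREATE TABLE vbv_denorm (ts timestamptz, sensor_id integer, value numeric,
                         extra_info text);  -- the "some other information"
CREATE TABLE summary    (bucket timestamptz, sensor_id integer, total numeric);

-- Step 2: trigger on vbv copies each new row (plus extra info) into vbv_denorm.
CREATE FUNCTION vbv_to_denorm() RETURNS trigger AS $$
BEGIN
    INSERT INTO vbv_denorm (ts, sensor_id, value, extra_info)
    VALUES (NEW.ts, NEW.sensor_id, NEW.value, 'additional info here');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_vbv_denorm AFTER INSERT ON vbv
    FOR EACH ROW EXECUTE PROCEDURE vbv_to_denorm();

-- Step 3: trigger on vbv_denorm maintains an aggregate in summary
-- (an hourly bucket is assumed; the real aggregation rule is not given).
CREATE FUNCTION denorm_to_summary() RETURNS trigger AS $$
BEGIN
    UPDATE summary
       SET total = total + NEW.value
     WHERE bucket = date_trunc('hour', NEW.ts)
       AND sensor_id = NEW.sensor_id;
    IF NOT FOUND THEN
        INSERT INTO summary (bucket, sensor_id, total)
        VALUES (date_trunc('hour', NEW.ts), NEW.sensor_id, NEW.value);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_denorm_summary AFTER INSERT ON vbv_denorm
    FOR EACH ROW EXECUTE PROCEDURE denorm_to_summary();
```

With this structure, every INSERT into vbv synchronously fires both triggers, so one sensor insert touches all three tables in the same transaction.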
Because the database is growing too fast (beyond a terabyte in less than 6
months), we need to delete the oldest data. In each delete we remove about
80 million records from vbv (a trigger deletes the corresponding records
from vbv_denorm and summary). After the delete, autovacuum starts on vbv.
At the same time, because summary is small (at most 2 million records), I
ran VACUUM ANALYZE on it (not VACUUM FULL). But this causes the database to
block completely: the process inserting into vbv blocks, and the process
sending summary records blocks. Why is this happening? autovacuum and VACUUM
ANALYZE should not lock tables.
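Plain VACUUM (without FULL) takes only a SHARE UPDATE EXCLUSIVE lock, which does not conflict with ordinary INSERTs, so something else is likely holding or waiting on a stronger lock. While the system is blocked, a query along the lines of the sketch below (a common pg_locks recipe; join it to pg_stat_activity if you also want the query text, noting that on pre-9.2 servers the activity column is procpid rather than pid) can show which backend holds the lock each waiter is queued behind:

```sql
-- For each backend waiting on a relation lock, show which backend
-- holds a granted lock on the same relation.
SELECT waiting.pid                 AS waiting_pid,
       waiting.relation::regclass  AS relation,
       waiting.mode                AS wanted_mode,
       holder.pid                  AS holding_pid,
       holder.mode                 AS held_mode
FROM pg_locks waiting
JOIN pg_locks holder
  ON  holder.relation = waiting.relation
  AND holder.granted
  AND holder.pid <> waiting.pid
WHERE NOT waiting.granted
  AND waiting.relation IS NOT NULL;
```

This does not fix the problem, but it identifies whether the blocker is the VACUUM, the autovacuum worker, the deleting transaction, or the cursor-holding sender process.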
pgsql-admin by date