VACUUM ANALYZE blocks the whole database

From: Majid Azimi <majid(dot)merkava(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: VACUUM ANALYZE blocks the whole database
Date: 2012-07-30 05:42:17
Message-ID: CADOhCmtq0QbrAxnYqUaVk1WJ93zis0QmUPSVj=PMhdrxpGcjhA@mail.gmail.com
Lists: pgsql-admin

hi guys,

we have a database containing the following tables:
1. some static tables with about 300 records
2. vbv
3. vbv_denorm
4. summary

here is how it works:
1. one process receives data from our sensor network and inserts it into
vbv
2. a trigger on vbv adds each new record, together with some other
information, into vbv_denorm --> we use this table for our reporting software
3. a trigger on vbv_denorm aggregates the data in vbv_denorm and inserts
the result into summary (both triggers are sketched below)
4. another process sends the summary records to another company on a
periodic basis. it sends the records one by one (I mean it opens a cursor,
sends one record, and only after confirmation sends the next one; see the
second sketch below)
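
to make it concrete, here is a rough sketch of the trigger chain; the
column names, trigger names and the aggregation logic are simplified
placeholders, not our real schema:

CREATE TABLE vbv        (id bigserial PRIMARY KEY, sensor_id int, val numeric, ts timestamptz);
CREATE TABLE vbv_denorm (id bigserial PRIMARY KEY, sensor_id int, val numeric, ts timestamptz, extra text);
CREATE TABLE summary    (sensor_id int, day date, total numeric, PRIMARY KEY (sensor_id, day));

-- step 2: copy every new vbv row, plus some extra reporting info, into vbv_denorm
CREATE FUNCTION vbv_to_denorm() RETURNS trigger AS $$
BEGIN
    INSERT INTO vbv_denorm (sensor_id, val, ts, extra)
    VALUES (NEW.sensor_id, NEW.val, NEW.ts, 'reporting info');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_vbv_to_denorm AFTER INSERT ON vbv
    FOR EACH ROW EXECUTE PROCEDURE vbv_to_denorm();

-- step 3: fold each vbv_denorm row into a per-sensor, per-day summary row
CREATE FUNCTION denorm_to_summary() RETURNS trigger AS $$
BEGIN
    UPDATE summary SET total = total + NEW.val
     WHERE sensor_id = NEW.sensor_id AND day = NEW.ts::date;
    IF NOT FOUND THEN
        INSERT INTO summary (sensor_id, day, total)
        VALUES (NEW.sensor_id, NEW.ts::date, NEW.val);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_denorm_to_summary AFTER INSERT ON vbv_denorm
    FOR EACH ROW EXECUTE PROCEDURE denorm_to_summary();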

this operation runs 24/7.
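
the sending process (step 4 above) does roughly the following inside one
transaction; the cursor name and ordering column are just examples:

BEGIN;
DECLARE summary_cur CURSOR FOR
    SELECT * FROM summary ORDER BY day;
FETCH NEXT FROM summary_cur;   -- send this row to the other company
-- wait for their confirmation, then fetch the next one
FETCH NEXT FROM summary_cur;
-- ... and so on until the batch is done
COMMIT;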

because the database is growing very fast (beyond a terabyte in less than 6
months) we need to delete the oldest data. in each delete we remove about
80 million records from vbv (there is a trigger that deletes the matching
records from vbv_denorm and summary). after the delete, autovacuum starts
on vbv. at the same time, because summary is small (at most 2 million
records), I ran VACUUM ANALYZE on it (plain VACUUM ANALYZE, not VACUUM
FULL). but this causes the whole database to block: the process inserting
into vbv blocks, and the process sending summary records blocks. Why is
this happening? autovacuum and plain VACUUM ANALYZE should not lock tables.
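
for reference, the maintenance step looks roughly like this (the cutoff,
column name and the lock query are illustrative, not our exact commands):

-- purge the oldest data; a trigger cascades the delete to vbv_denorm and summary
DELETE FROM vbv WHERE ts < now() - interval '6 months';   -- roughly 80 million rows

-- autovacuum then starts on vbv; meanwhile, on the small summary table:
VACUUM ANALYZE summary;   -- plain vacuum, not VACUUM FULL

-- a standard pg_locks query to list lock requests that are not granted
-- while everything is stuck, and the relation they are waiting on
SELECT l.pid, l.mode, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;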
