
Minimizing dead tuples caused by update triggers

From: Dan Harris <fbsd(at)drivefaster(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Minimizing dead tuples caused by update triggers
Date: 2007-12-19 23:38:25
Message-ID: 624895BA-B4D0-470B-80FC-4AE55C4FD85E@drivefaster.net
Lists: pgsql-performance
I've been fighting for quite a while with the common workarounds for poor
response times on select count(*) and min()/max() over tables with tens of
millions of rows, and I understand the reasons for the table scans.
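
For reference, the kind of summary query I keep needing looks roughly like
this (the table and column names are made up for illustration):

  -- Hypothetical "event_log" table with tens of millions of rows; on my
  -- 8.0 install each of these aggregates ends up scanning the whole table.
  SELECT count(*)        AS row_count,
         min(created_at) AS oldest,
         max(created_at) AS newest
    FROM event_log;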

I have applications that regularly poll a table (ideally, the more
frequently, the better) to learn the most recent data inside it as well as
the number of rows in it (among a few other things).  As the databases have
grown in size, these summarizations could no longer be done on the fly, so
I wrote a database wrapper API that tracks those values internally.

This wrapper has grown very complex and is difficult to manage across
different systems.  What I'd like to do instead is implement triggers on
inserts, updates, and deletes that check and/or replace a row in a
"table_stats" table holding each monitored table's row count, min/max
dates, and the results of a few other costly operations, so those values
can then be queried in short order (roughly along the lines of the sketch
below).  I know this is a fairly common thing to do.
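
To be concrete, I'm picturing something like the following.  This is only a
sketch with made-up names (an "event_log" table with a "created_at"
column), not code I'm actually running:

  -- Hypothetical monitored table.
  CREATE TABLE event_log (
      id         serial PRIMARY KEY,
      created_at timestamptz NOT NULL DEFAULT now()
  );

  -- One row of pre-computed stats per monitored table.
  CREATE TABLE table_stats (
      table_name  text PRIMARY KEY,
      row_count   bigint NOT NULL DEFAULT 0,
      min_created timestamptz,
      max_created timestamptz
  );
  INSERT INTO table_stats (table_name) VALUES ('event_log');

  CREATE OR REPLACE FUNCTION update_table_stats() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          UPDATE table_stats
             SET row_count   = row_count + 1,
                 min_created = CASE WHEN min_created IS NULL
                                      OR NEW.created_at < min_created
                                    THEN NEW.created_at ELSE min_created END,
                 max_created = CASE WHEN max_created IS NULL
                                      OR NEW.created_at > max_created
                                    THEN NEW.created_at ELSE max_created END
           WHERE table_name = TG_RELNAME;
      ELSIF TG_OP = 'DELETE' THEN
          -- Deletes (and updates of created_at) would need more care,
          -- since the true min/max can't be maintained incrementally.
          UPDATE table_stats
             SET row_count = row_count - 1
           WHERE table_name = TG_RELNAME;
      END IF;
      RETURN NULL;  -- AFTER trigger, so the return value is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER event_log_stats
      AFTER INSERT OR DELETE ON event_log
      FOR EACH ROW EXECUTE PROCEDURE update_table_stats();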

The thing that concerns me is dead tuples on the table_stats table.  I
believe that every insert of new data into one of the monitored tables will
result in an UPDATE of the table_stats table.  When thousands (or millions)
of rows are inserted, select performance on table_stats (even with an
index) slows down in a hurry.  If I wrap the inserts into large
transactions, will it only perform the update on table_stats when I commit?
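
Put another way, the load pattern I'm asking about looks something like
this (again using the hypothetical names from the sketch above):

  -- Bulk insert wrapped in one transaction.  The question is whether the
  -- row-level trigger above still results in one UPDATE of table_stats
  -- per inserted row, or effectively one update at COMMIT time.
  BEGIN;
  INSERT INTO event_log (created_at)
      SELECT now() FROM generate_series(1, 100000);
  COMMIT;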

Obviously I want to vacuum this table regularly to reclaim the space from
those dead tuples.  The problem I'm running into is contention between
VACUUM (not full) and pg_dump (version 8.0.12).  My system backup takes 6
hours to run pg_dump on a 400GB cluster directory.  If the vacuum command
fires during the dump, it forces an exclusive lock and any queries will
hang until pg_dump finishes.
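
The vacuum in question is nothing exotic; what I'd like to be able to run
frequently, even while the dump is going, is just a plain per-table vacuum
along these lines:

  -- Plain (non-FULL) vacuum of only the stats table; VERBOSE reports how
  -- many dead row versions were found and removed.
  VACUUM VERBOSE table_stats;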

If I have to wait until pg_dump is finished before issuing the VACUUM  
command, everything slows down significantly as the dead tuples in  
table_stats pile up.

What strategy could I employ to either:

1. resolve the contention between pg_dump and vacuum, or
2. reduce the dead-tuple pile-up between vacuums

Thanks for reading

-Dan
