We've been satisfied users of PostgreSQL for several years, and use it
to power a national pet adoption website: http://www.adoptapet.com/
Recently we've had a regularly-timed middle-of-the-night problem where
database handles are exhausted for a very brief period.
In tracking it down, I have found that the event seems to correspond to
a time when a cron script is deleting from a large logging table, but
I'm not certain if this is the cause or a correlation.
We are deleting about 5 million rows from a time-based logging table
that is replicated by Slony. We are currently using a single delete
statement, which takes about 15 minutes to run. There is no RI on the
table, but the use of Slony means that a trigger call and action is made
for every row deleted, which causes a corresponding insertion in another
table so the deletion can be replicated to the slave.
- Could this kind of activity lead to an upward spiral in database
handle usage?
- Would it be advisable to use several small DELETE statements instead,
deleting rows in batches of 1,000? We could use the recipe for this
that was posted earlier to this list:
delete from table where pk in
(select pk from table where delete_condition limit X);
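To make that recipe concrete, here is a rough sketch of running it in a loop with a commit between batches, so no single transaction holds locks (or generates Slony log rows) for 15 minutes at a stretch. The table and column names (log_table, logged_at) are made up for illustration, and SQLite's rowid stands in for the primary key here just so the pattern is self-contained; against PostgreSQL you'd issue the same "pk IN (SELECT pk ... LIMIT n)" DELETE through your usual driver:

```python
import sqlite3

def delete_in_batches(conn, cutoff, batch_size=1000):
    """Delete rows older than cutoff, batch_size at a time.

    Committing after each batch keeps every transaction short, so
    other sessions are never blocked for long and the replication
    trigger's per-row work is spread out instead of one huge burst.
    """
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM log_table WHERE rowid IN "
            "(SELECT rowid FROM log_table WHERE logged_at < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()
        if cur.rowcount == 0:      # nothing left matching the condition
            break
        total += cur.rowcount
    return total
```

A short sleep between batches would further smooth the load if the nightly window allows it.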
Partitions seem attractive here, but aren't easy to use with Slony.
Perhaps once we migrate to PostgreSQL 9.0 and the hot standby feature
we can try them.
Thanks for your help!
. . . . . . . . . . . . . . . . . . . . . . . . . . .
Mark Stosberg Principal Developer
mark(at)summersault(dot)com Summersault, LLC
765-939-9301 ext 202 database driven websites
. . . . . http://www.summersault.com/ . . . . . . . .