On Fri, May 07, 2010 at 09:37:42AM -0400, Mark Stosberg wrote:
> We've been a satisfied user of PostgreSQL for several years, and use it
> to power a national pet adoption website: http://www.adoptapet.com/
> Recently we've had a regularly-timed middle-of-the-night problem where
> database handles are exhausted for a very brief period.
> In tracking it down, I have found that the event seems to correspond to
> a time when a cron script is deleting from a large logging table, but
> I'm not certain if this is the cause or a correlation.
> We are deleting about 5 million rows from a time-based logging table
> that is replicated by Slony. We are currently using a single delete
> statement, which takes about 15 minutes to run. There is no RI on the
> table, but the use of Slony means that a trigger call and action is made
> for every row deleted, which causes a corresponding insertion in another
> table so the deletion can be replicated to the slave.
> My questions:
> - Could this kind of activity lead to an upward spiral in database
> handle usage?
> - Would it be advisable to use several small DELETE statements instead,
> to delete rows in batches of 1,000. We could use the recipe for this
> that was posted earlier to this list:
Yes, that is the method we use in several cases to avoid this behavior.
Deletion is a relatively intensive process in PostgreSQL, so batching it
keeps it from dragging down other queries, which is what results in your
out-of-handles situation.
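The batch-delete recipe can be sketched as below. This is a minimal,
self-contained illustration using SQLite via Python's sqlite3 module
rather than PostgreSQL, and the table/column names ("log", "ts") are
hypothetical; on PostgreSQL the same idea is usually written as a
DELETE against a LIMITed subselect on ctid or the primary key, with a
commit between batches so locks and handles are released:

```python
import sqlite3

def batched_delete(conn, cutoff, batch_size=1000):
    """Delete rows older than `cutoff` in small batches, committing
    between batches so other sessions are not blocked for the whole
    run (the PostgreSQL analogue deletes by ctid or primary key)."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM log WHERE rowid IN "
            "(SELECT rowid FROM log WHERE ts < ? LIMIT ?)",
            (cutoff, batch_size),
        )
        conn.commit()            # release locks after every batch
        if cur.rowcount == 0:    # nothing left to delete
            break
        total += cur.rowcount
    return total

# Demo: 2500 old rows plus one recent row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (ts TEXT)")
conn.executemany(
    "INSERT INTO log VALUES (?)",
    [("2010-04-%02d" % (i % 30 + 1),) for i in range(2500)],
)
conn.execute("INSERT INTO log VALUES ('2010-05-07')")
conn.commit()

deleted = batched_delete(conn, cutoff="2010-05-01", batch_size=1000)
remaining = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(deleted)    # 2500 (removed in batches of 1000, 1000, 500)
print(remaining)  # 1
```

The key point is the commit inside the loop: each small transaction
finishes quickly, so the Slony trigger work and lock traffic are spread
out instead of concentrated in one 15-minute statement.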