I'm executing 30,000 single-row DELETE statements in one transaction.
At this point I'm looking into combining the individual deletes into fewer, multi-row statements, and breaking my big transaction into smaller ones of about 100 deletes each.
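As a rough sketch of that batching idea (assuming the table name "xxx" and column "ID" from the thread, and psycopg2-style %s placeholders; this just builds the statements, it doesn't talk to a server):

```python
def batched_deletes(ids, batch_size=100):
    """Yield (sql, params) pairs, one per batch of ids.

    Instead of 30,000 single-row DELETEs, each statement removes up to
    batch_size rows at once via PostgreSQL's ANY(array) construct.
    """
    for start in range(0, len(ids), batch_size):
        batch = ids[start:start + batch_size]
        # One DELETE matches the whole batch of ids.
        sql = 'DELETE FROM xxx WHERE "ID" = ANY(%s)'
        yield sql, (batch,)

# 30,000 ids in batches of 100 -> 300 statements instead of 30,000.
statements = list(batched_deletes(list(range(30000))))
```

Each batch could then be run in its own transaction to keep them short.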
On 01/06/2011, at 11:40 AM, Craig Ringer wrote:
> On 1/06/2011 7:11 AM, Pierre C wrote:
>>> If i run 30,000 prepared "DELETE FROM xxx WHERE "ID" = ?" commands it
>>> takes close to 10 minutes.
>> Do you run those in a single transaction or do you use one transaction
>> per DELETE ?
>> In the latter case, postgres will ensure each transaction is committed to
>> disk, at each commit. Since this involves waiting for the physical I/O
>> to happen, it is slow. If you do it 30,000 times, it will be 30,000
>> times as slow.
> Not only that, but if you're doing it via some application the app has to wait for Pg to respond before it can send the next query. This adds even more delay, as do all the processor switches between Pg and your application.
> If you really must issue individual DELETE commands one-by-one, I *think* you can use synchronous_commit=off or
> SET LOCAL synchronous_commit TO OFF;
> Craig Ringer
> Tech-related writing at http://soapyfrogs.blogspot.com/
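A minimal psql sketch of the SET LOCAL approach Craig mentions (table/column names taken from the thread; the deletes shown are placeholders):

```
BEGIN;
-- Affects only this transaction: a crash may lose the commit,
-- but cannot corrupt the database.
SET LOCAL synchronous_commit TO OFF;
DELETE FROM xxx WHERE "ID" = 1;
DELETE FROM xxx WHERE "ID" = 2;
-- ... remaining deletes ...
COMMIT;
```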