Re: Delete performance

From: Jarrod Chesney <jarrod(dot)chesney(at)gmail(dot)com>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: Pierre C <lists(at)peufeu(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Delete performance
Date: 2011-06-01 01:45:04
Message-ID: 154D1F3C-0605-434C-BA1D-246CF257DB32@gmail.com
Lists: pgsql-performance

I'm executing 30,000 single delete statements in one transaction.

At this point I'm looking into combining the multiple deletes into one statement, and breaking my big transaction into smaller ones of about 100 deletes each.
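A minimal sketch of that batching plan (hypothetical code, not from the thread; it uses sqlite3 purely as a self-contained stand-in for PostgreSQL — with psycopg2 you could instead issue a single DELETE FROM xxx WHERE "ID" = ANY(%s) per batch):

```python
import sqlite3

# Stand-in database: table "xxx" with 30,000 rows, matching the numbers above.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE xxx ("ID" INTEGER PRIMARY KEY)')
conn.executemany('INSERT INTO xxx ("ID") VALUES (?)',
                 [(i,) for i in range(30000)])
conn.commit()

ids_to_delete = list(range(30000))
BATCH = 100  # batch size discussed above: ~100 deletes per transaction

for start in range(0, len(ids_to_delete), BATCH):
    batch = ids_to_delete[start:start + BATCH]
    # One multi-row DELETE instead of 100 single-row DELETEs...
    placeholders = ",".join("?" for _ in batch)
    conn.execute(f'DELETE FROM xxx WHERE "ID" IN ({placeholders})', batch)
    # ...and one commit per batch instead of per row, so the client waits
    # for a synchronous commit 300 times rather than 30,000 times.
    conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM xxx").fetchone()[0]
print(remaining)  # 0
```

This cuts both costs Craig describes below: far fewer client/server round trips, and far fewer commits to wait on.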

On 01/06/2011, at 11:40 AM, Craig Ringer wrote:

> On 1/06/2011 7:11 AM, Pierre C wrote:
>>> If i run 30,000 prepared "DELETE FROM xxx WHERE "ID" = ?" commands it
>>> takes close to 10 minutes.
>>
>> Do you run those in a single transaction or do you use one transaction
>> per DELETE ?
>>
>> In the latter case, postgres will ensure each transaction is committed to
>> disk at each commit. Since this involves waiting for the physical I/O
>> to happen, it is slow. If you do it 30,000 times, it will be 30,000
>> times slow.
>
> Not only that, but if you're doing it via some application the app has to wait for Pg to respond before it can send the next query. This adds even more delay, as do all the processor switches between Pg and your application.
>
> If you really must issue individual DELETE commands one-by-one, I *think* you can use synchronous_commit=off or
>
> SET LOCAL synchronous_commit TO OFF;
>
> See:
>
> http://www.postgresql.org/docs/current/static/runtime-config-wal.html
>
>
> --
> Craig Ringer
>
> Tech-related writing at http://soapyfrogs.blogspot.com/
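For reference, a minimal sketch of the setting Craig suggests, scoped with SET LOCAL so it applies only to the enclosing transaction (the table and ID value are placeholders from the thread, not real ones):

```sql
BEGIN;
-- Commits in this transaction return without waiting for the WAL flush.
-- A server crash can lose the most recently reported-committed transactions,
-- but it cannot corrupt the database.
SET LOCAL synchronous_commit TO OFF;
DELETE FROM xxx WHERE "ID" = 12345;  -- placeholder ID
COMMIT;
```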
