Re: how to avoid deadlock on masive update with multiples delete

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Anibal David Acosta <aa(at)devshock(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how to avoid deadlock on masive update with multiples delete
Date: 2012-10-04 16:10:08
Message-ID: CAMkU=1x_tM6ujEY39FUk=Lj2=NoT+-vJzoFMtwuKPN8Jd5PqrQ@mail.gmail.com
Lists: pgsql-performance

On Thu, Oct 4, 2012 at 7:01 AM, Anibal David Acosta <aa(at)devshock(dot)com> wrote:
> Hi,
>
> I have a table with about 10 million records. This table is updated and
> inserted into very often during the day (approx. 200 per second); at
> night the activity is much lower, so in the first seconds of the day
> (00:00:01) a batch process updates some columns of this table (used as
> counters), setting their values to 0.
>
>
>
> Yesterday, for the first time, I got a deadlock when another process
> tried to delete multiple (about 10 or 20) rows from the same table.
...
>
> Any ideas how to prevent this situation?

The bulk update could take an EXCLUSIVE (not ACCESS EXCLUSIVE) lock on the
table before it starts. Or the delete could perhaps be arranged to delete
the records in ctid order (although that might still deadlock). Or you
could just repeat the failed transaction.
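
For example, assuming the counters live in a table called "stats" and the
batch resets them in a single statement (the table and column names here
are made up), the nightly job could look roughly like this:

    BEGIN;
    -- EXCLUSIVE conflicts with the ROW EXCLUSIVE lock taken by
    -- UPDATE/DELETE, so concurrent writers queue behind the batch
    -- instead of interleaving row locks with it; plain SELECTs are
    -- not blocked.
    LOCK TABLE stats IN EXCLUSIVE MODE;
    UPDATE stats SET counter1 = 0, counter2 = 0;
    COMMIT;

DELETE has no ORDER BY of its own, so one way to approximate the
ctid-order idea is to lock the target rows in a fixed order first and
then delete them (again just a sketch; "some_condition" stands in for
whatever the delete currently uses):

    BEGIN;
    SELECT 1 FROM stats
     WHERE some_condition
     ORDER BY ctid
       FOR UPDATE;          -- take the row locks in ctid order
    DELETE FROM stats WHERE some_condition;
    COMMIT;

And if a transaction still fails with SQLSTATE 40P01 (deadlock_detected),
the application can simply retry it.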

Cheers,

Jeff
