Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Jim Jarvie <jim(at)talentstack(dot)to>
Cc: Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: CPU hogged by concurrent SELECT..FOR UPDATE SKIP LOCKED
Date: 2020-08-19 00:08:56
Message-ID: CAHOFxGodYysPhKRwybWksDdxQu0EGVMtgL1dGpLrdXxj382ttg@mail.gmail.com
Lists: pgsql-performance

Message queue...
Are rows deleted? Are they updated once or many times? Have you adjusted
fillfactor on table or indexes? How many rows in the table currently or on
average? Is there any ordering to which rows you update?
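If the workload is update-heavy, lowering fillfactor is one of the knobs worth checking. A hypothetical sketch (table and index names are invented for illustration) of what that adjustment looks like:

```sql
-- Leave ~30% free space in each heap and index page so that updates
-- can often be HOT (heap-only tuple) updates, reducing index churn.
ALTER TABLE job_queue SET (fillfactor = 70);
ALTER INDEX job_queue_status_idx SET (fillfactor = 70);

-- The new fillfactor only applies to newly written pages; rewriting
-- the table applies it everywhere (VACUUM FULL takes an exclusive lock,
-- so pg_repack may be preferable on a busy queue table).
VACUUM FULL job_queue;
```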

It seems likely that one of the experts/code contributors will chime in and explain how locking that many rows across that many concurrent connections overruns some resource, so that you end up escalating to a table lock instead of truly locking only the 250 rows you wanted.

On the other hand, you say you have 80 cores and you are trying to push the number of concurrent processes well beyond that without (much) disk I/O involved. I wouldn't expect that to perform well.

Is there a chance to modify the code so that each process locks 1000 rows at a time and you are content with 64 concurrent processes?
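The larger-batch pattern I have in mind is the usual SKIP LOCKED claim query, just with a bigger LIMIT. A sketch, with hypothetical table/column names:

```sql
-- Each worker claims a batch of 1000 pending rows, skipping any rows
-- already row-locked by other workers, so workers never block each other.
BEGIN;

SELECT id
FROM job_queue
WHERE status = 'pending'
ORDER BY id          -- consistent ordering reduces contention on the same pages
LIMIT 1000
FOR UPDATE SKIP LOCKED;

-- ... process the claimed rows, then UPDATE or DELETE them ...

COMMIT;  -- releases the row locks
```

With 64 workers pulling 1000 rows each you keep the same throughput in flight while doing far fewer lock acquisitions per row processed.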
