Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

From: Andres Freund <andres(at)anarazel(dot)de>
To: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>
Cc: konstantin knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager
Date: 2018-06-05 13:02:35
Message-ID: 20180605130235.m53dgtgorjpvkdmr@alap3.anarazel.de
Lists: pgsql-hackers

On 2018-06-05 13:09:08 +0300, Alexander Korotkov wrote:
> On Tue, Jun 5, 2018 at 12:48 PM Konstantin Knizhnik
> <k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
> > The workload is a combination of inserts and selects.
> > Looks like the shared locks obtained by selects cause starvation of inserts trying to get the exclusive relation extension lock.
> > The problem is fixed by the fair lwlock patch, implemented by Alexander Korotkov. This patch prevents granting a shared lock if the wait queue is not empty.
> > Maybe we should use this patch or find some other way to prevent starvation of writers on relation extension locks for such workloads.
>
> The fair lwlock patch really fixed the starvation of exclusive lwlock
> waiters. But that starvation happens not on the relation extension lock,
> because selects don't take a shared relation extension lock. The real
> issue there was not the relation extension lock itself, but the time
> spent while holding it.

Yea, that makes a lot more sense to me.
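
For concreteness, here's a minimal sketch of the fairness rule described
above, i.e. don't hand out new shared acquisitions while an exclusive
waiter is queued. This is a pthread-based toy with invented names, not
the actual LWLock patch:

#include <pthread.h>

typedef struct FairRWLock
{
    pthread_mutex_t mutex;
    pthread_cond_t  ok_to_read;
    pthread_cond_t  ok_to_write;
    int             readers;            /* active shared holders */
    int             writer;             /* 1 if an exclusive holder is present */
    int             waiting_writers;    /* queued exclusive waiters */
} FairRWLock;

void
fair_lock_init(FairRWLock *lock)
{
    pthread_mutex_init(&lock->mutex, NULL);
    pthread_cond_init(&lock->ok_to_read, NULL);
    pthread_cond_init(&lock->ok_to_write, NULL);
    lock->readers = lock->writer = lock->waiting_writers = 0;
}

void
fair_lock_shared(FairRWLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    /* The "fair" part: also wait while exclusive waiters are queued. */
    while (lock->writer || lock->waiting_writers > 0)
        pthread_cond_wait(&lock->ok_to_read, &lock->mutex);
    lock->readers++;
    pthread_mutex_unlock(&lock->mutex);
}

void
fair_lock_exclusive(FairRWLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    lock->waiting_writers++;
    while (lock->writer || lock->readers > 0)
        pthread_cond_wait(&lock->ok_to_write, &lock->mutex);
    lock->waiting_writers--;
    lock->writer = 1;
    pthread_mutex_unlock(&lock->mutex);
}

void
fair_unlock(FairRWLock *lock)
{
    pthread_mutex_lock(&lock->mutex);
    if (lock->writer)
        lock->writer = 0;
    else
        lock->readers--;
    if (lock->readers == 0 && lock->waiting_writers > 0)
        pthread_cond_signal(&lock->ok_to_write);
    else if (lock->waiting_writers == 0)
        pthread_cond_broadcast(&lock->ok_to_read);
    pthread_mutex_unlock(&lock->mutex);
}

The only change relative to an ordinary rwlock is the waiting_writers
check in the shared path; that's also exactly where a naive version can
penalize read-mostly workloads, hence the need for careful benchmarking.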

> It appears that buffer replacement happening inside the relation
> extension lock is affected by starvation on exclusive buffer mapping
> lwlocks and buffer content lwlocks, caused by many concurrent shared
> lockers. So, the fair lwlock patch has no direct influence on the
> relation extension lock, which is naturally not even an lwlock...

Yea, that makes sense. I wonder how much of the fix here is to "pre-clear"
a victim buffer, and how much is a saner buffer replacement
implementation (either by going away from O(NBuffers), or by having a
queue of clean victim buffers like my bgwriter replacement).
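
To make the second idea a bit more concrete, here's a rough,
self-contained sketch of a "queue of clean victim buffers" (invented
names, nothing like the actual bgwriter/freelist code): a background
process pushes already-cleaned buffer ids into a small ring, and a
backend that needs a victim, e.g. while holding the relation extension
lock, pops one instead of running the replacement machinery itself:

#include <pthread.h>

#define VICTIM_QUEUE_SIZE 128

typedef struct CleanVictimQueue
{
    pthread_mutex_t lock;
    int             bufids[VICTIM_QUEUE_SIZE];
    int             head;       /* next slot to pop */
    int             tail;       /* next slot to push */
    int             count;
} CleanVictimQueue;

static CleanVictimQueue victim_queue = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Called by the background process after it has cleaned a buffer. */
int
victim_queue_push(CleanVictimQueue *q, int bufid)
{
    int         pushed = 0;

    pthread_mutex_lock(&q->lock);
    if (q->count < VICTIM_QUEUE_SIZE)
    {
        q->bufids[q->tail] = bufid;
        q->tail = (q->tail + 1) % VICTIM_QUEUE_SIZE;
        q->count++;
        pushed = 1;
    }
    pthread_mutex_unlock(&q->lock);
    return pushed;
}

/*
 * Called by a backend that needs a victim buffer.  Returns -1 when the
 * queue is empty; the caller then falls back to the regular replacement
 * path (the clock sweep, possibly having to write out a dirty page).
 */
int
victim_queue_pop(CleanVictimQueue *q)
{
    int         bufid = -1;

    pthread_mutex_lock(&q->lock);
    if (q->count > 0)
    {
        bufid = q->bufids[q->head];
        q->head = (q->head + 1) % VICTIM_QUEUE_SIZE;
        q->count--;
    }
    pthread_mutex_unlock(&q->lock);
    return bufid;
}

With something like this, the expensive part of buffer replacement would
happen outside the extension lock in the common case, which is what
would shrink the time spent holding it.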

> I'll post fair lwlock path in a separate thread. It requires detailed
> consideration and benchmarking, because there is a risk of regression
> on specific workloads.

I bet that doing it naively will regress massively in a number of cases.

Greetings,

Andres Freund
