From: Florian Pflug <fgp(at)phlo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: spinlock contention
Date: 2011-07-08 10:02:34
Message-ID: 651002C1-2EC1-4731-9B29-99217CB36653@phlo.org
Lists: pgsql-hackers

On Jul 7, 2011, at 18:09, Robert Haas wrote:
> On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug <fgp(at)phlo(dot)org> wrote:
>> In effect, the resulting thing is an LWLock with a partitioned shared
>> counter. The partition one backend operates on for shared locks is
>> determined by its backend id.
>>
>> I've added the implementation to the lock benchmarking tool at
>> https://github.com/fgp/lockbench
>> and also pushed a patched version of postgres to
>> https://github.com/fgp/postgres/tree/lwlock_part
>>
>> The number of shared counter partitions is currently 4, but can easily
>> be adjusted in lwlock.h. The code uses GCC's atomic fetch and add
>> intrinsic if available, otherwise it falls back to using a per-partition
>> spin lock.
>
> I think this is probably a good trade-off for locks that are most
> frequently taken in shared mode (like SInvalReadLock), but it seems
> like it could be a very bad trade-off for locks that are frequently
> taken in both shared and exclusive mode (e.g. ProcArrayLock,
> BufMappingLocks).
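
To recap the scheme in concrete terms, here is a minimal illustrative
sketch of the shared-lock fast path (standalone C, not the patch code;
all struct and function names here are mine):

#define NUM_SHARED_PARTITIONS 4      /* matches the current default */

typedef struct PartitionedLock
{
    int     exclusive;                       /* held exclusively? */
    int     shared[NUM_SHARED_PARTITIONS];   /* per-partition share counts */
} PartitionedLock;

/*
 * Shared acquire, fast path: each backend bumps only its own partition,
 * so concurrent shared lockers don't bounce a single counter's cache
 * line between CPUs. (A real implementation must re-check 'exclusive'
 * afterwards and back off if it is set.)
 */
static void
shared_ref(PartitionedLock *lock, int backend_id)
{
    /* GCC >= 4.1 compiles this to a single "lock xadd" on x86 */
    __sync_fetch_and_add(&lock->shared[backend_id % NUM_SHARED_PARTITIONS], 1);
}

static void
shared_unref(PartitionedLock *lock, int backend_id)
{
    __sync_fetch_and_sub(&lock->shared[backend_id % NUM_SHARED_PARTITIONS], 1);
}

/*
 * An exclusive locker must scan all partitions and see them at zero --
 * this is the extra cost for the mixed shared/exclusive case.
 */
static int
all_partitions_idle(PartitionedLock *lock)
{
    int     i;

    for (i = 0; i < NUM_SHARED_PARTITIONS; i++)
        if (lock->shared[i] != 0)
            return 0;
    return 1;
}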

The mixed shared/exclusive case is definitely a concern. One idea I had
to alleviate it is to
add a per-partition "used" flag to the LWLock struct, and set that
to true (if not already set) before incrementing a partition's
shared counter. Exclusive lockers would then only need to inspect those
partitions for which the flag is set, and would clear all flags after
having acquired the lock.

I actually tried to do that initially, but figuring out where and how
to safely clear the flag again turned out to be a bit hairy. So I decided
to keep it simple first, and wait to see whether that problem manifests
itself in practice.
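
In code, the flag idea would look roughly like this (again illustrative
only, reusing the names from the sketch above; the clearing step is
where it gets hairy):

typedef struct PartitionedLockFlagged
{
    volatile int    used[NUM_SHARED_PARTITIONS];    /* partition touched? */
    int             shared[NUM_SHARED_PARTITIONS];  /* per-partition counts */
} PartitionedLockFlagged;

static void
shared_ref_flagged(PartitionedLockFlagged *lock, int backend_id)
{
    int     part = backend_id % NUM_SHARED_PARTITIONS;

    if (!lock->used[part])
        lock->used[part] = 1;   /* must be visible before the increment */
    __sync_fetch_and_add(&lock->shared[part], 1);
}

/*
 * Run by an exclusive locker once it holds the lock: only flagged
 * partitions needed checking, and the flags are reset for the next
 * round. Doing this safely against concurrent shared_ref_flagged()
 * calls is exactly the hairy part.
 */
static void
clear_used_flags(PartitionedLockFlagged *lock)
{
    int     i;

    for (i = 0; i < NUM_SHARED_PARTITIONS; i++)
        lock->used[i] = 0;
}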

> I don't want to fiddle with your git repo, but if you attach a patch
> that applies to the master branch I'll give it a spin if I have time.

Patch attached.

Beware that it needs at least GCC 4.1; on older compilers it falls back
to a per-partition spin lock instead of a "locked xadd" to increment the
shared counters.
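
The version requirement comes from the __sync builtins, which GCC
provides from 4.1 on; the selection between the two paths amounts to a
test along these lines (the patch's exact check may differ):

#if defined(__GNUC__) && \
    (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
/* atomic path: __sync_fetch_and_add() becomes "lock xadd" on x86 */
#define PART_COUNTER_ADD(ptr)   __sync_fetch_and_add((ptr), 1)
#else
/* fallback: wrap a plain increment in the partition's spinlock */
#endif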

best regards,
Florian Pflug

Attachment Content-Type Size
pg_lwlock_part.patch application/octet-stream 29.7 KB
