
Re: spinlock->pthread_mutex : first results with Jeff's pgbench+plsql

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Nils Goroll <slink(at)schokola(dot)de>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Martijn van Oosterhout <kleptog(at)svana(dot)org>, Merlin Moncure <mmoncure(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: spinlock->pthread_mutex : first results with Jeff's pgbench+plsql
Date: 2012-07-02 16:20:02
Lists: pgsql-hackers
Nils Goroll <slink(at)schokola(dot)de> writes:
> Here is how I read this, under the assumption that the test was correct and
> valid _and_ can be reproduced independently:

> * for very low concurrency, the existing spinlock implementation is ideal -
>   we can't do any better in terms of either resulting sps or resource
>   consumption.

>   One path to explore here would be PTHREAD_MUTEX_ADAPTIVE_NP, which is
>   essentially the same as a spinlock for the contended case with very low
>   lock acquisition time. The code which I have tested uses
>   PTHREAD_MUTEX_NORMAL, which, on Linux, will always syscall for the
>   contended case.

>   Quite clearly the overhead comes from the futex syscalls, because kernel
>   resource consumption is 3x higher with the patch than without.

> * With this benchmark, for "half" concurrency on the order of 0.5 x #cores,
>   spinlocks still yield better tps, but the resource overhead of spinlocks
>   starts to take off and futexes are already 40% more efficient, despite the
>   fact that spinlocks still have a 25% advantage in terms of sps.

> * At "full" concurrency (64 threads on 64 cores), the resource consumption
>   of the spinlocks leads to almost doubled overall resource consumption, and
>   the increased efficiency starts to pay off in terms of sps.

> * and for the "quadruple overloaded" case (2x128 threads on 64 cores), spinlock
>   contention really brings the system down and sps drops to half.

These conclusions seem plausible, though I agree we'd want to reproduce
similar behavior elsewhere before acting on the results.

What this seems to me to show, though, is that pthread mutexes are not
fundamentally a better technology than what we have now in spinlocks.
The problem is that the spinlock code is not adapting well to very high
levels of contention.  I wonder whether a better and less invasive fix
could be had by playing with the rules for adjustment of
spins_per_delay.  Right now, those are coded without any thought about
high-contention cases.  In particular I wonder whether we ought to
try to determine which individual locks are high-contention, and behave
differently when trying to acquire those.

			regards, tom lane

