From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Nils Goroll <slink(at)schokola(dot)de>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Martijn van Oosterhout <kleptog(at)svana(dot)org>, Merlin Moncure <mmoncure(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Update on the spinlock->pthread_mutex patch experimental: replace s_lock spinlock code with pthread_mutex on linux
Date: 2012-07-01 18:25:25
Message-ID: CA+TgmoZyp_8QoOXnbDWDSgDq0faLqV=MAHTnipYuWd-xJPoxJQ@mail.gmail.com
Lists: pgsql-hackers
On Sun, Jul 1, 2012 at 11:13 AM, Nils Goroll <slink(at)schokola(dot)de> wrote:
> as this patch was not targeted towards increasing tps, I am happy to hear
> that your benchmarks also suggest that performance is "comparable".
>
> But my main question is: how about resource consumption? For the issue I am
> working on, my current working hypothesis is that spinning on locks saturates
> resources and brings down overall performance in a high-contention situation.
>
> Do you have any getrusage figures or anything equivalent?
Spinlock contention causes tps to go down. The fact that tps didn't
change much in this case suggests that either these workloads don't
generate enough spinlock contention to benefit from your patch, or
your patch doesn't meaningfully reduce it, or both. We might need a
test case that is more spinlock-bound to observe an effect.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company