From: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Bernd Helmle <mailings(at)oopsware(dot)de>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: LWLock optimization for multicore Power machines
Date: 2017-02-14 12:53:37
Message-ID: CAPpHfdtouGMnTjh5YKwYxFm2O=u8x+Spn1_H1jd+m6G-ZvPqrw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Feb 13, 2017 at 10:17 PM, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> On 02/13/2017 03:16 PM, Bernd Helmle wrote:
>
>> Am Samstag, den 11.02.2017, 15:42 +0300 schrieb Alexander Korotkov:
>>
>>> Thus, I see reasons why in your tests absolute results are lower than
>>> in my
>>> previous tests.
>>> 1. You use 28 physical cores while I was using 32 physical cores.
>>> 2. You run tests in PowerVM while I was running test on bare metal.
>>> PowerVM could have some overhead.
>>> 3. I guess you run pgbench on the same machine. While in my tests
>>> pgbench
>>> was running on another node of IBM E880.
>>>
>>>
>> Yeah, pgbench was running locally. Maybe we can get some resources to
>> run them remotely.
>>
>> Interesting side note: if you run a second Postgres instance
>> concurrently with the same pgbench parameters, each of the two
>> instances gets nearly the same transaction throughput as a single
>> instance does on its own.
>>
>>
> That strongly suggests you're hitting some kind of lock. It'd be good to
> know which one. Unless you're doing "pgbench -S" (select-only), the regular
> pgbench script also updates pgbench_branches and other tiny tables - it's
> possible the sessions are trying to update the same row in those tiny
> tables. You're running with scale 1000, but even so, with 100 clients
> collisions are still likely thanks to the birthday paradox.
>
> Otherwise it might be interesting to look at sampling wait events, which
> might tell us more about the locks.
>
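(Not part of the original thread: a quick sanity check of the birthday-paradox point quoted above. Assuming each of 100 concurrent sessions picks a pgbench_branches row uniformly at random out of 1000, a collision on the same row is near-certain.)

```python
from math import prod

def collision_probability(sessions: int, rows: int) -> float:
    """P(at least two sessions pick the same row), assuming
    independent, uniform random picks (hypothetical model, not
    pgbench's actual random-number generator)."""
    p_all_distinct = prod(1 - i / rows for i in range(sessions))
    return 1 - p_all_distinct

# 100 clients against 1000 branch rows: collision is almost guaranteed
print(f"{collision_probability(100, 1000):.3f}")
```

So even a scale factor ten times the client count does not avoid contention on those tiny tables.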
+1

And you could use pg_wait_sampling
<https://github.com/postgrespro/pg_wait_sampling> to sample the wait
events.
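A minimal sketch of what that could look like (assuming the extension is built against the running server and loaded via shared_preload_libraries; pg_wait_sampling_profile is the view the extension provides with accumulated sample counts per event):

```sql
-- Requires: shared_preload_libraries = 'pg_wait_sampling' and a restart
CREATE EXTENSION pg_wait_sampling;

-- While pgbench is running, see which wait events dominate:
SELECT event_type, event, sum(count) AS samples
FROM pg_wait_sampling_profile
GROUP BY event_type, event
ORDER BY samples DESC
LIMIT 10;
```

If the two-instance throughput ceiling really is a lock, the dominant LWLock or Lock entry should stand out here.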
------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company