From: Andres Freund <andres(at)anarazel(dot)de>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Cc: Jaime Casanova <jaime(dot)casanova(at)2ndQuadrant(dot)com>
Subject: Re: LogwrtResult contended spinlock
On August 31, 2020 11:21:56 AM PDT, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>Jaime Casanova recently reported a situation where pglogical, replicating
>from 64 POS sites to a single central (64-core) node, each site with two
>replication sets, causes XLog's info_lck to become highly contended
>because of frequent reads of LogwrtResult. We tested the simple fix of
>adding a new LWLock that protects LogwrtResult and LogwrtRqst; that
>seems to solve the problem easily enough.
>At first I wanted to make the new LWLock cover only LogwrtResult
>and leave LogwrtRqst alone. However, on doing so, it seemed that it
>might change the locking protocol in a nontrivial way, so I decided to
>make it cover both and call it a day. We did verify that the patch
>solves the reported problem, at any rate.
Wouldn't the better fix here be to allow reading of individual members without a lock? E.g. by wrapping each in a 64-bit atomic.
Sent from my Android device with K-9 Mail. Please excuse my brevity.