From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Jaime Casanova <jaime(dot)casanova(at)2ndQuadrant(dot)com>
Subject: Re: LogwrtResult contended spinlock
Date: 2020-08-31 18:29:38
Message-ID: CE5FB7B4-BD5B-40C1-A915-0AE770812C3D@anarazel.de
Lists: pgsql-hackers
Hi,
On August 31, 2020 11:21:56 AM PDT, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>Jaime Casanova recently reported a situation where pglogical,
>replicating from 64 POS sites to a single central (64-core) node, each
>with two replication sets, causes XLog's info_lck to become highly
>contended because of frequently reading LogwrtResult. We tested the
>simple fix of adding a new LWLock that protects LogwrtResult and
>LogwrtRqst; that seems to solve the problem easily enough.
>
>At first I wanted to make the new LWLock cover only LogwrtResult
>proper, and leave LogwrtRqst alone. However, on doing it, it seemed
>that that might change the locking protocol in a nontrivial way. So I
>decided to make it cover both and call it a day. We did verify that
>the patch solves the reported problem, at any rate.
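
(For reference, the fix described above amounts to something like the
sketch below; LogwrtResultLock is a hypothetical lock name, not
necessarily what the posted patch uses.)

/* Readers take the new lock in shared mode instead of info_lck: */
static void
GetLogwrtResult(XLogwrtResult *result)
{
    LWLockAcquire(LogwrtResultLock, LW_SHARED);
    *result = XLogCtl->LogwrtResult;
    LWLockRelease(LogwrtResultLock);
}

/* Writers would update LogwrtResult and LogwrtRqst under the same lock
 * in LW_EXCLUSIVE mode, replacing the SpinLockAcquire(&XLogCtl->info_lck)
 * calls that protect those fields today. */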
Wouldn't the better fix here be to allow reading of individual members without a lock? E.g., by wrapping each in a 64-bit atomic.
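
For illustration, a minimal sketch of that idea using the
pg_atomic_uint64 API from port/atomics.h (the struct and function names
here are hypothetical, not from any posted patch):

#include "port/atomics.h"

/* Each member gets its own 64-bit atomic, so readers need no lock. */
typedef struct
{
    pg_atomic_uint64 Write;     /* last byte + 1 written out */
    pg_atomic_uint64 Flush;     /* last byte + 1 flushed */
} AtomicXLogwrtResult;

/* Lock-free read: a plain atomic load of the member of interest. */
static XLogRecPtr
GetWriteRecPtr(AtomicXLogwrtResult *r)
{
    return (XLogRecPtr) pg_atomic_read_u64(&r->Write);
}

/* Writers advance a member monotonically with a CAS loop, so a stale
 * value from a slow backend can never move the pointer backwards. */
static void
AdvanceWriteRecPtr(AtomicXLogwrtResult *r, XLogRecPtr upto)
{
    uint64      cur = pg_atomic_read_u64(&r->Write);

    while (cur < (uint64) upto)
    {
        /* on failure, cur is refreshed to the current value; retry */
        if (pg_atomic_compare_exchange_u64(&r->Write, &cur, (uint64) upto))
            break;
    }
}

Note the tradeoff: the members are then only individually atomic, so
code that needs a consistent Write/Flush pair would still need some
other form of synchronization.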
Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.