From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
Cc: Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Daniel Gustafsson <daniel(at)yesql(dot)se>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: MultiXact/SLRU buffers configuration
On Thu, Oct 29, 2020 at 12:08:21PM +0500, Andrey Borodin wrote:
>> On 29 Oct 2020, at 04:32, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> It's not my intention to be mean or anything like that, but to me this
>> means we don't really understand the problem we're trying to solve. Had
>> we understood it, we would be able to construct a workload reproducing
>> the issue ...
>> I understand what the individual patches are doing, and maybe those
>> changes are desirable in general. But without any benchmarks from a
>> plausible workload I find it hard to convince myself that:
>> (a) it actually will help with the issue you're observing on production
>> (b) it's actually worth the extra complexity (e.g. the lwlock changes)
>> I'm willing to invest some of my time into reviewing/testing this, but I
>> think we badly need better insight into the issue, so that we can build
>> a workload reproducing it. Perhaps collecting some perf profiles and a
>> sample of the queries might help, but I assume you already tried that.
>Thanks, Tomas! This totally makes sense.
>Indeed, collecting queries has not helped yet. We have a load-test environment equivalent to production (but with 10x fewer shards) and a copy of the production workload queries. But the problem does not manifest there.
>Why do I think the problem is in MultiXacts?
>Here is a chart with the number of wait events on each host.
>During the problem, MultiXactOffsetControlLock and SLRURead dominate all other wait event types. After switchover of the primary to another node, SLRURead continued there for a bit, then disappeared.
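>For context, a breakdown like the one in the chart can be sampled from pg_stat_activity (available since PostgreSQL 9.6). This is only a sketch of typical wait-event sampling, not necessarily the exact query behind the chart:
>
>    -- Snapshot of current wait events across all backends; sampling this
>    -- repeatedly (e.g. once per second) approximates the chart's distribution.
>    SELECT wait_event_type, wait_event, count(*)
>    FROM pg_stat_activity
>    WHERE wait_event IS NOT NULL
>    GROUP BY wait_event_type, wait_event
>    ORDER BY count(*) DESC;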
OK, so most of this seems to be due to SLRURead and
MultiXactOffsetControlLock. Could it be that there were too many
multixact members, triggering autovacuum to prevent multixact
wraparound? That might generate a lot of IO on the SLRU. Are you
monitoring the size of the pg_multixact directory?
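To be concrete, one way to watch for that (a sketch; mxid_age() and pg_database.datminmxid are available in all supported releases):

    -- How far each database's oldest multixact is from the current one;
    -- compare against autovacuum_multixact_freeze_max_age (default 400M).
    SELECT datname,
           mxid_age(datminmxid) AS multixact_age
    FROM pg_database
    ORDER BY multixact_age DESC;

Alongside that, tracking the on-disk size of pg_multixact/offsets and pg_multixact/members (e.g. with du) would show whether member-space growth is driving the SLRU I/O.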
>Backtraces on standbys during the problem show that most backends are sleeping in pg_sleep(1000L), and so are not included in the wait stats on these charts.
>Currently I'm considering writing a test that directly calls MultiXactIdExpand(), MultiXactIdCreate(), and GetMultiXactIdMembers() from an extension. Do you think benchmarks in such tests would be meaningful?
I don't know. I'd much rather have a SQL-level benchmark than an
extension doing this kind of stuff.
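For what it's worth, multixacts can be generated at the SQL level by having many concurrent transactions take FOR KEY SHARE locks on the same rows, so a pgbench script along these lines might serve as such a benchmark (a hedged sketch; the file name, row range, and sleep are illustrative):

    -- multixact_bench.sql: concurrent FOR KEY SHARE locks on a small set of
    -- rows force the server through MultiXactIdCreate/MultiXactIdExpand.
    \set aid random(1, 100)
    BEGIN;
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid FOR KEY SHARE;
    SELECT pg_sleep(0.01);
    END;

Run against a database initialized with "pgbench -i", something like "pgbench -f multixact_bench.sql -c 64 -j 8 -T 60" should keep many overlapping row locks alive, which is what forces single locker xids to be promoted into multixacts.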
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services