|From:||Andrey Borodin <x4mmm(at)yandex-team(dot)ru>|
|To:||Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>|
|Cc:||Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Daniel Gustafsson <daniel(at)yesql(dot)se>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>|
|Subject:||Re: MultiXact\SLRU buffers configuration|
Tomas, thanks for looking into this!
> On 28 Oct 2020, at 06:36, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> This thread started with a discussion about making the SLRU sizes
> configurable, but this patch version only adds a local cache. Does this
> achieve the same goal, or would we still gain something by having GUCs
> for the SLRUs?
> If we're claiming this improves performance, it'd be good to have some
> workload demonstrating that and measurements. I don't see anything like
> that in this thread, so it's a bit hand-wavy. Can someone share details
> of such workload (even synthetic one) and some basic measurements?
All patches in this thread aim at the same goal: improving performance in the presence of MultiXact lock contention.
I could not build a synthetic reproduction of the problem; however, I did some MultiXact stressing here. It's a clumsy test program, because it is still not clear to me which workload parameters trigger MultiXact lock contention. In the generic case I was encountering other locks instead, such as XidGenLock, MultiXactGenLock, etc. Yet our production system has encountered this problem roughly once a month throughout this year.
The test program takes FOR SHARE locks on different sets of tuples in the presence of concurrent full scans.
To produce a set of locks we choose one of 14 bits: if a row number has this bit set to 0, we lock that row.
I've been measuring the time to lock all rows, 3 times for each of the 14 bits, observing the total time to set all locks.
During the test I was watching locks in pg_stat_activity; if they did not contain enough MultiXact locks, I tuned the parameters further (number of concurrent clients, number of bits, select queries, etc.).
Why is it so complicated? Because other reproductions of the problem seemed to run into other locks instead.
Let's describe the patches in this thread from the POV of this test.
*** Configurable SLRU buffers for MultiXact members and offsets.
From the tests it is clear that both high and low values for these buffers affect the test time. Here are the times for one test run with different offsets and members sizes.
Our production currently runs with (numbers are pages of buffers)
+#define NUM_MXACTOFFSET_BUFFERS 32
+#define NUM_MXACTMEMBER_BUFFERS 64
And, looking back at the incidents in summer and fall 2020, it seems to have mostly helped.
But it's hard to give tuning advice based on the test results. The values (32, 64) produce a 10% better result than the current hardcoded values (8, 16). In the generic case this is not what someone should tune first.
*** Configurable caches of MultiXacts.
The tests were specifically designed to defeat caches, so according to the test, the bigger the cache, the more time it takes to complete the test.
Anyway, the cache is local to a backend, and its purpose is deduplication of written MultiXacts, not enhancing reads.
*** Taking advantage of SimpleLruReadPage_ReadOnly() in MultiXacts.
This simply brings MultiXact in line with subtransactions and CLOG: those SLRUs already take advantage of reading pages under a shared lock.
On synthetic tests without background selects this patch adds another ~4.7% of performance. This improvement seems consistent across different parameter values, yet it is within measurement deviation (see the difference between the warmup run and the closing run).
All in all, these attempts to measure the impact are hand-wavy too. But it makes sense to use a consistent approach among similar subsystems (MultiXacts, subtransactions, CLOG, etc.).
*** Reduce sleep in GetMultiXactIdMembers() on standby.
The problem with pg_usleep(1000L) within GetMultiXactIdMembers() manifests on standbys during contention on MultiXactOffsetControlLock. It's even harder to reproduce.
Yet it seems obvious that reducing the sleep to a shorter interval will reduce the number of backends asleep at any given moment.
For consistency I've returned the patch with SLRU buffer configs to the patchset (the other patches are intact). But I'm mostly interested in patches 1 and 3.
Best regards, Andrey Borodin.