Re: MultiXact\SLRU buffers configuration

From Andrey Borodin
Subject Re: MultiXact\SLRU buffers configuration
Date
Msg-id 13D8FD63-559A-4737-B7FD-05288D1CEF8B@yandex-team.ru
In reply to Re: MultiXact\SLRU buffers configuration  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses Re: MultiXact\SLRU buffers configuration  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List pgsql-hackers
Tomas, thanks for looking into this!

> On 28 Oct 2020, at 06:36, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>
>
> This thread started with a discussion about making the SLRU sizes
> configurable, but this patch version only adds a local cache. Does this
> achieve the same goal, or would we still gain something by having GUCs
> for the SLRUs?
>
> If we're claiming this improves performance, it'd be good to have some
> workload demonstrating that and measurements. I don't see anything like
> that in this thread, so it's a bit hand-wavy. Can someone share details
> of such workload (even synthetic one) and some basic measurements?

All patches in this thread aim at the same goal: improving performance in the presence of MultiXact lock contention.
I could not build a synthetic reproduction of the problem, but I did some MultiXact stressing here [0]. It's a
clumsy test program, because it is still not clear to me which workload parameters trigger MultiXact lock
contention. In the generic case I kept running into other locks like *GenLock: XidGenLock, MultiXactGenLock etc. Yet our
production system has hit this problem roughly once a month throughout this year.

The test program takes FOR SHARE locks on different sets of tuples in the presence of concurrent full scans.
To produce a set of locks we choose one of 14 bits; if a row number has that bit set to 0, we lock the row.
I measured the time to lock all rows three times for each of the 14 bits, observing the total time to set all locks.
During the test I watched the locks in pg_stat_activity; if they did not contain enough MultiXact locks, I tuned the
parameters further (number of concurrent clients, number of bits, select queries etc.).
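
For illustration, each locking client does roughly the following (a simplified libpq sketch; the table name "stress"
and the single-connection loop are invented for the example, the real program at [0] runs many such clients
concurrently together with full-scan sessions):

/*
 * One locking client: for each of the 14 bits, share-lock every row whose
 * id has that bit cleared.  Concurrent clients locking overlapping row
 * sets force MultiXacts to be created.
 */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	PGconn	   *conn = PQconnectdb("");	/* connection settings from environment */

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	for (int bit = 0; bit < 14; bit++)
	{
		char		sql[256];
		PGresult   *res;

		snprintf(sql, sizeof(sql),
				 "BEGIN;"
				 " SELECT id FROM stress WHERE (id >> %d) & 1 = 0 FOR SHARE;"
				 " COMMIT;", bit);

		res = PQexec(conn, sql);	/* last result is the COMMIT */
		if (PQresultStatus(res) != PGRES_COMMAND_OK)
			fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
		PQclear(res);
	}

	PQfinish(conn);
	return 0;
}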

Why is it so complicated? It seems that other attempts to reproduce the problem ran into other locks.

Let's describe the patches in this thread from the point of view of these tests.

*** Configurable SLRU buffers for MultiXact members and offsets.
From the tests it is clear that both high and low values for these buffers affect the test time. Here are the timings
for one test run with different offsets and members sizes [1].
Our production currently runs with (the numbers are pages of buffers)
+#define NUM_MXACTOFFSET_BUFFERS 32
+#define NUM_MXACTMEMBER_BUFFERS 64
And, looking back at the incidents in summer and fall 2020, it seems to have mostly helped.

But it's hard to give tuning advice based on the test results. The values (32, 64) produce a 10% better result than the
current hardcoded values (8, 16). In the generic case this is not what someone should tune first.
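
To make the direction concrete, patch 1 roughly amounts to the following (a sketch with invented variable names,
not a hunk from the attached patch): the compile-time constants become variables settable at server start and are
passed to the SLRU sizing and initialization calls.

/* hypothetical startup-settable variables replacing the #defines */
int			multixact_offsets_buffers = 8;	/* today's NUM_MXACTOFFSET_BUFFERS */
int			multixact_members_buffers = 16;	/* today's NUM_MXACTMEMBER_BUFFERS */

Size
MultiXactShmemSize(void)
{
	Size		size = SHARED_MULTIXACT_STATE_SIZE;

	/* SLRU shared memory now follows the settings instead of the constants */
	size = add_size(size, SimpleLruShmemSize(multixact_offsets_buffers, 0));
	size = add_size(size, SimpleLruShmemSize(multixact_members_buffers, 0));

	return size;
}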

*** Configurable caches of MultiXacts.
The tests were specifically designed to defeat caches, so according to the tests, the bigger the cache is, the more
time it takes to complete the test [2].
Anyway, the cache is local to a backend and its purpose is deduplication of written MultiXacts, not enhancing reads.
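The write-side deduplication is essentially this check in MultiXactIdCreateFromMembers() (a paraphrased fragment,
not an exact quote):

	/* check the backend-local cache before allocating a new MultiXact */
	multi = mXactCacheGetBySet(nmembers, members);
	if (MultiXactIdIsValid(multi))
		return multi;		/* same member set seen before: re-use it */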

*** Using advantage of SimpleLruReadPage_ReadOnly() in MultiXacts.
This simply aligns MultiXact with subtransactions and CLOG: other SLRUs already take advantage of reading SLRU pages
under a shared lock.
On synthetic tests without background selects this patch adds another ~4.7% of performance [3] against [4]. This
improvement seems consistent across different parameter values, yet it is within the measurement deviation (see the
difference between the warmup run [5] and the closing run [6]).
All in all, these attempts to measure the impact are hand-wavy too. But it makes sense to use a consistent approach
among similar subsystems (MultiXacts, Subtrans, CLOG etc.).
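
In code terms the change is roughly this (an illustrative before/after fragment, not the patch hunk; the lock name is
the one used elsewhere in this mail):

	/* before: every reader of an offsets page serializes on the exclusive lock */
	LWLockAcquire(MultiXactOffsetControlLock, LW_EXCLUSIVE);
	slotno = SimpleLruReadPage(MultiXactOffsetCtl, pageno, true, multi);

	/*
	 * after: SimpleLruReadPage_ReadOnly() serves buffer hits under a shared
	 * lock and falls back to the exclusive path only on a miss, the same way
	 * CLOG and subtrans readers already do
	 */
	slotno = SimpleLruReadPage_ReadOnly(MultiXactOffsetCtl, pageno, multi);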

*** Reduce sleep in GetMultiXactIdMembers() on standby.
The problem with pg_usleep(1000L) within GetMultiXactIdMembers() manifests on standbys during contention on
MultiXactOffsetControlLock. It's even harder to reproduce.
Yet it seems obvious that reducing the sleep to a shorter interval will reduce the number of sleeping backends.
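The retry path in question looks roughly like this (a fragment from memory, so details may differ between versions);
the patch simply shrinks the fixed sleep so a backend that loses the race re-checks sooner instead of idling for a
whole millisecond:

		/* next multixact is still being filled in: back off and retry */
		LWLockRelease(MultiXactOffsetControlLock);
		CHECK_FOR_INTERRUPTS();
		pg_usleep(1000L);	/* current code: a full millisecond per iteration */
		goto retry;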

For consistency I've returned the patch with the SLRU buffer configs to the patchset (the other patches are intact).
But I'm mostly concerned about patches 1 and 3.

Thanks!

Best regards, Andrey Borodin.

[0] https://github.com/x4m/multixact_stress
[1] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L22-L39
[2] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L83-L99
[3] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L9
[4] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L29
[5] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L3
[6] https://github.com/x4m/multixact_stress/blob/master/testresults.txt#L19


