Re: MultiXact\SLRU buffers configuration

From: Kyotaro Horiguchi
Subject: Re: MultiXact\SLRU buffers configuration
Date:
Msg-id: 20200514.102526.1602501255796880628.horikyota.ntt@gmail.com
In reply to: Re: MultiXact\SLRU buffers configuration  ("Andrey M. Borodin" <x4mmm@yandex-team.ru>)
Responses: Re: MultiXact\SLRU buffers configuration  ("Andrey M. Borodin" <x4mmm@yandex-team.ru>)
List: pgsql-hackers
At Wed, 13 May 2020 23:08:37 +0500, "Andrey M. Borodin" <x4mmm@yandex-team.ru> wrote in
>
>
> > On 11 May 2020, at 16:17, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
> >
> > I've gone ahead and created 3 patches:
> > 1. Configurable SLRU buffer sizes for MultiXactOffsets and MultiXactMembers
> > 2. Reduce locking level to shared on read of MultiXactId members
> > 3. Configurable cache size
>
> I'm looking more at MultiXact and it seems to me that we have a race condition there.
>
> When we create a new MultiXact we do:
> 1. Generate new MultiXactId under MultiXactGenLock
> 2. Record new mxid with members and offset to WAL
> 3. Write offset to SLRU under MultiXactOffsetControlLock
> 4. Write members to SLRU under MultiXactMemberControlLock

But don't we hold an exclusive lock on the buffer through all of the steps
above?
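
To make the ordering easier to talk about, here is a minimal stub model of
the write side as I read the steps quoted above. The lock names match
multixact.c, but the locks, the SLRUs and the WAL step are reduced to
printfs, plain arrays and a comment, so this is only an illustration of the
sequence, not the actual code:

    /* Stub model of the creation sequence quoted above; not PostgreSQL code. */
    #include <stdio.h>

    typedef unsigned int MultiXactId;
    typedef unsigned int MultiXactOffset;
    typedef unsigned int TransactionId;

    static void lock(const char *name)   { printf("LWLockAcquire(%s)\n", name); }
    static void unlock(const char *name) { printf("LWLockRelease(%s)\n", name); }

    /* In-memory stand-ins for the offsets and members SLRUs. */
    static MultiXactOffset offsets_slru[256];
    static TransactionId   members_slru[4096];

    static MultiXactId     next_mxid = 1;
    static MultiXactOffset next_offset = 1;    /* offset 0 means "not set yet" */

    /* Step 1: allocate the mxid and its starting member offset. */
    static MultiXactId
    get_new_multixact_id(int nmembers, MultiXactOffset *offset)
    {
        MultiXactId mxid;

        lock("MultiXactGenLock");
        mxid = next_mxid++;
        *offset = next_offset;
        next_offset += nmembers;
        unlock("MultiXactGenLock");
        return mxid;
    }

    /* Steps 3 and 4: publish the offset, then the members, each under its
     * own SLRU control lock.  Step 2 (the WAL record) is only a comment. */
    static void
    record_new_multixact(MultiXactId mxid, MultiXactOffset offset,
                         int nmembers, const TransactionId *xids)
    {
        /* Step 2 would go here: write the new mxid to WAL. */

        lock("MultiXactOffsetControlLock");
        offsets_slru[mxid] = offset;            /* step 3 */
        unlock("MultiXactOffsetControlLock");

        lock("MultiXactMemberControlLock");
        for (int i = 0; i < nmembers; i++)      /* step 4 */
            members_slru[offset + i] = xids[i];
        unlock("MultiXactMemberControlLock");
    }

    int
    main(void)
    {
        TransactionId   xids[] = {1001, 1002};
        MultiXactOffset off;
        MultiXactId     mxid = get_new_multixact_id(2, &off);

        record_new_multixact(mxid, off, 2, xids);
        printf("created mxid %u at offset %u\n", mxid, off);
        return 0;
    }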

> When we read MultiXact we do:
> 1. Retrieve offset by mxid from SLRU under MultiXactOffsetControlLock
> 2. If offset is 0 - it's not yet filled in by step 3 of the previous algorithm, so we sleep and goto 1
> 3. Retrieve members from SLRU under MultiXactMemberControlLock
> 4. ..... what do we do if there are just zeroes because step 4 has not been executed yet? Nothing, we return an empty members list.

So transactions never see such incomplete mxids, I believe.
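
And the read side, continuing the same stub model above (the real
GetMultiXactIdMembers() derives the member count from the next mxid's
offset; here it is simply passed in, and the SLRU lookups are the same
plain arrays):

    /* Stub model of the read path described in the quoted steps. */
    #include <unistd.h>

    int
    get_multixact_members(MultiXactId mxid, int nmembers, TransactionId *out)
    {
        MultiXactOffset off;
        int             found = 0;

        /* Steps 1-2: retry until the creator has published the offset,
         * i.e. until step 3 of the creation sequence has run. */
        for (;;)
        {
            lock("MultiXactOffsetControlLock");
            off = offsets_slru[mxid];
            unlock("MultiXactOffsetControlLock");
            if (off != 0)
                break;
            usleep(1000);       /* the "sleep and goto 1" from the steps above */
        }

        /* Steps 3-4: copy whatever member xids are present.  Slots the
         * creator has not written yet are still zero and are skipped,
         * which is the "return empty members list" case in the question. */
        lock("MultiXactMemberControlLock");
        for (int i = 0; i < nmembers; i++)
        {
            if (members_slru[off + i] != 0)
                out[found++] = members_slru[off + i];
        }
        unlock("MultiXactMemberControlLock");

        return found;
    }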

> What am I missing?

regards.

--
Kyotaro Horiguchi
NTT Open Source Software Center


