Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock

From: Dilip Kumar
Subject: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock
Date:
Msg-id: CAFiTN-vrkrBoj-WED=04Ggc4-bv7rzcX3bOvNbcrAKYogBhkJA@mail.gmail.com
In reply to: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock  ("Andrey M. Borodin" <x4mmm@yandex-team.ru>)
Responses: Re: SLRU optimization - configurable buffer pool and partitioning the SLRU lock  (Alvaro Herrera <alvherre@alvh.no-ip.org>)
List: pgsql-hackers
On Mon, Nov 6, 2023 at 1:05 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:

> > On 6 Nov 2023, at 09:09, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
> >
> >> Having a hashtable to find the SLRU page in the buffer IMV is too slow. Some comments on this approach can be found here [0].
> >> I'm OK with having HTAB for that if we are sure performance does not degrade significantly, but I really doubt this is the case.
> >> I even think SLRU buffers used HTAB in some ancient times, but I could not find the commit where it was changed to linear search.
> >
> > The main intention of having this buffer mapping hash is to find the
> > SLRU page faster than a sequential search when banks are relatively
> > big, but if we find cases where the hash creates more overhead than
> > it provides gain then I am fine with removing it, because the whole
> > purpose of adding the hash here is to make the lookup faster.  So
> > far in my tests I have not found any slowness.  Do you or anyone
> > else have a test case, based on the previous research, showing
> > whether it creates any slowness?
> PFA test benchmark_slru_page_readonly(). In this test we run
> SimpleLruReadPage_ReadOnly() (an essential part of
> TransactionIdGetStatus()).
> Before introducing the HTAB for buffer mapping I get
> Time: 14837.851 ms (00:14.838)
> With the buffer HTAB I get
> Time: 22723.243 ms (00:22.723)
>
> This hash table makes getting transaction status ~50% slower.
>
> The benchmark script I used:
> make -C $HOME/postgresMX -j 8 install &&
>   (pkill -9 postgres; rm -rf test; ./initdb test &&
>    echo "shared_preload_libraries = 'test_slru'" >> test/postgresql.conf &&
>    ./pg_ctl -D test start &&
>    ./psql -c 'create extension test_slru' postgres &&
>    ./pg_ctl -D test restart &&
>    ./psql -c "SELECT count(test_slru_page_write(a, 'Test SLRU'))
>               FROM generate_series(12346, 12393, 1) as a;" \
>          -c '\timing' \
>          -c "SELECT benchmark_slru_page_readonly(12377);" postgres)
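
For context, the two lookup strategies being compared boil down to
something like the sketch below (a standalone illustration, not the
patch's code; BANK_SIZE, lookup_linear(), lookup_hash(), and the
open-addressed map are all made-up names for this example):

#define BANK_SIZE     16        /* slots per bank */
#define HASH_NBUCKETS 64        /* > BANK_SIZE so probe chains stay short */
#define EMPTY_PAGE    (-1)

/* Linear search: scan every slot of the page's bank.  Cost grows
 * with BANK_SIZE, which is why it falls behind once banks reach
 * ~32 slots. */
static int
lookup_linear(const int bank_pages[BANK_SIZE], int pageno)
{
    for (int slot = 0; slot < BANK_SIZE; slot++)
        if (bank_pages[slot] == pageno)
            return slot;
    return -1;                  /* page not cached */
}

/* Hash lookup: a constant number of probes regardless of bank size,
 * but every probe pays the hashing/bucket overhead, which dominates
 * when banks are small.  Assumes the map is never completely full. */
typedef struct { int pageno; int slot; } MapEntry;

static int
lookup_hash(const MapEntry map[HASH_NBUCKETS], int pageno)
{
    for (unsigned i = (unsigned) pageno % HASH_NBUCKETS;;
         i = (i + 1) % HASH_NBUCKETS)
    {
        if (map[i].pageno == pageno)
            return map[i].slot;         /* found */
        if (map[i].pageno == EMPTY_PAGE)
            return -1;                  /* empty bucket: not cached */
    }
}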

With this test, I got the numbers below:

nslots   no-hash   hash
8        10s       13s
16       10s       13s
32       15s       13s
64       17s       13s

Yeah, so we can see that with a small bank size (<=16 slots) fetching
a page with the hash is ~30% slower than the sequential search, but
beyond 32 slots the sequential search becomes slower as the number of
slots grows, whereas the hash stays constant, as expected.  But, as
you said, if we keep the lock partition range independent of the bank
size then we might not have any problem with having more banks, and
with that we can keep the bank size small, like 16 (a rough sketch of
what I mean is below).  Let me put some more thought into this and
get back.
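
Roughly, the decoupling could look like this (purely illustrative
names and constants, not patch code):

#define BANK_SIZE       16      /* small banks keep the linear scan cheap */
#define NUM_BANK_LOCKS 128      /* fixed, independent of the bank count */

/* The bank count scales with the configured buffer pool... */
static inline int
PageToBank(int pageno, int nbanks)
{
    return pageno % nbanks;
}

/* ...while several banks share one lock, so growing the pool does
 * not grow the lock array. */
static inline int
BankToLock(int bankno)
{
    return bankno % NUM_BANK_LOCKS;
}

With that split, growing the buffer pool adds banks without adding
locks, and each bank stays small enough for a cheap linear scan.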
Any other opinions on this?

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


