Re: Cache Hash Index meta page.

From Jeff Janes
Subject Re: Cache Hash Index meta page.
Date
Msg-id CAMkU=1ybzfXJUOdU2pD4riH3VzTkT3L4=8+vPCUVS=yRb7gjbA@mail.gmail.com
In reply to Cache Hash Index meta page.  (Mithun Cy <mithun.cy@enterprisedb.com>)
Responses Re: Cache Hash Index meta page.  (Mithun Cy <mithun.cy@enterprisedb.com>)
List pgsql-hackers

On Fri, Jul 22, 2016 at 3:02 AM, Mithun Cy <mithun.cy@enterprisedb.com> wrote:

I have created a patch to cache the meta page of a hash index in backend-private memory. This saves reading the meta page buffer every time we want to find the bucket page. In the "_hash_first" call, we read the meta page buffer twice just to make sure the bucket was not split after we found the bucket page. With this patch, the meta page buffer is not read at all if the bucket has not been split since the meta page was cached.

The idea is to cache the meta page data in rd_amcache and to store the maxbucket number in hasho_prevblkno of the bucket's primary page (which would otherwise always be NULL, so it is reused here for this purpose). When we do a hash lookup for a bucket page, if the locally cached maxbucket number is greater than or equal to the maxbucket number stored in the bucket page, we can say the bucket has not been split after we cached the meta page, and hence avoid reading the meta page buffer.
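For illustration, here is a minimal standalone C sketch of the check described above. The struct and function names are simplified stand-ins rather than the actual PostgreSQL definitions, and it assumes (per the description) that the bucket's primary page carries in hasho_prevblkno the maxbucket value as of that bucket's last split:

/*
 * Simplified model of the cached-metapage validity check; types and
 * names here are illustrative, not the real hash AM definitions.
 */
#include <stdbool.h>
#include <stdint.h>

typedef struct CachedMetaData
{
    uint32_t maxbucket;  /* highest bucket number when the meta page was cached */
    uint32_t highmask;   /* masks used to map a hash value to a bucket */
    uint32_t lowmask;
} CachedMetaData;

typedef struct BucketPage
{
    uint32_t hasho_prevblkno;  /* reused: maxbucket as of this bucket's last split */
    /* ... tuples and other opaque data ... */
} BucketPage;

/* Map a hash value to a bucket number using the cached metapage masks. */
uint32_t
hashkey_to_bucket(uint32_t hashkey, const CachedMetaData *cache)
{
    uint32_t bucket = hashkey & cache->highmask;

    if (bucket > cache->maxbucket)
        bucket = bucket & cache->lowmask;
    return bucket;
}

/*
 * If the bucket was split after we cached the meta page, the maxbucket value
 * stored in the bucket page is newer (larger) than our cached one, and we
 * must go back to the real meta page; otherwise the cached copy is still valid.
 */
bool
cached_metapage_still_valid(const CachedMetaData *cache, const BucketPage *bucket)
{
    return cache->maxbucket >= bucket->hasho_prevblkno;
}

Only when this check fails would the lookup fall back to re-reading the meta page buffer and refreshing the copy in rd_amcache.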

I have attached the benchmark results and perf stats (see hash_index_perf_stat_and_benchmarking.odc [sheet 1: perf stats; sheet 2: benchmark results]). There we can see improvements at higher client counts, since lwlock contention from the meta page buffer reads is higher with more clients. If I apply the same patch on top of Amit's concurrent hash index patch [1], we see improvements at lower client counts as well; Amit's patch removed a heavyweight page lock that was the bottleneck at lower client counts.

[1] Concurrent Hash Indexes


Hi Mithun,

Can you describe your benchmarking machine?  Your benchmarking data went up to 128 clients.  But how many cores does the machine have?  Are you testing how well it can use the resources it has, or how well it can deal with oversubscription of the resources?

Also, was the file supposed to be named .ods?  I couldn't open it as an .odc file.

Cheers,

Jeff

