Re: Scaling shared buffer eviction

From: Amit Kapila
Subject: Re: Scaling shared buffer eviction
Date:
Msg-id: CAA4eK1KwsBJKPgG7ntw_tuCeL8N06daFW_HsO60vqvgB=CJ1fw@mail.gmail.com
In reply to: Re: Scaling shared buffer eviction  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

> Performance Data:
> -------------------------------
>
> Configuration and Db Details
> IBM POWER-7 16 cores, 64 hardware threads
> RAM = 64GB
> Database Locale = C
> checkpoint_segments = 256
> checkpoint_timeout = 15min
> scale factor = 3000
> Client Count = number of concurrent sessions and threads (ex. -c 8 -j 8)
> Duration of each individual run = 5mins
>
> All the data is in tps and taken using pgbench read-only load

The common configuration remains the same as above.
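
Each read-only data point is taken with a pgbench command of roughly
this shape (-S selects the built-in SELECT-only script, -T gives the
duration in seconds, and -c/-j vary per the client counts shown):

pgbench -S -T 300 -c 8 -j 8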

Shared_Buffers = 500MB
Client Count/Patch_Ver        8      16      32      64     128
HEAD                      56248  100112  121341   81128   56552
Patch                     59389  112483  157034  185740  166725

..
 
Observations
---------------------
1. The performance improvement is up to 2~3 times for higher client
counts (64, 128).
2. For the lower client count (8), we see a 2~5% performance
improvement.
3. Overall, this improves the read scalability.
4. For a lower number of shared buffers, there is a minor dip in tps
even with the patch (it might be that we can improve it by tuning the
high water mark for the number of buffers on the freelist; I will try
this by varying the high water mark). The water-mark scheme is
sketched just below this list.
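
To illustrate the water-mark scheme mentioned in point 4, here is a
minimal standalone C sketch of the idea (illustrative only, not the
patch itself; the names, constants, and simplified clock sweep are
assumptions): the freelist is refilled up to the high water mark
whenever backends drain it below the low water mark, with victims
chosen by a clock sweep. In the patch, the intent is that a background
process does this refilling so backends rarely run the sweep themselves.

#include <stdio.h>

#define NBUFFERS 64000              /* e.g. 500MB of 8KB buffers */

static int usage_count[NBUFFERS];   /* toy usage counts (all cold here) */
static int clock_hand = 0;          /* clock-sweep position */
static int freelist_len = 0;        /* buffers currently on the freelist */

/* High water mark: 0.5% of shared buffers; low: 20% of the high mark. */
static const int high_water = NBUFFERS / 200;       /* 320 */
static const int low_water = (NBUFFERS / 200) / 5;  /* 64 */

/* One step of a simplified clock sweep: find the next victim buffer. */
static int
clock_sweep_victim(void)
{
    for (;;)
    {
        int buf = clock_hand;

        clock_hand = (clock_hand + 1) % NBUFFERS;
        if (usage_count[buf] == 0)
            return buf;         /* cold buffer: eligible for freelist */
        usage_count[buf]--;     /* give it another trip around the clock */
    }
}

/* What a background reclaimer might do each time it is woken. */
static void
reclaim_to_high_water(void)
{
    while (freelist_len < high_water)
    {
        (void) clock_sweep_victim();    /* would be pushed onto freelist */
        freelist_len++;
    }
}

int
main(void)
{
    /* Simulate backends draining the freelist; refill below low water. */
    for (int alloc = 0; alloc < 1000; alloc++)
    {
        if (freelist_len <= low_water)
            reclaim_to_high_water();
        freelist_len--;             /* a backend takes a free buffer */
    }
    printf("final freelist length: %d (HM=%d, LM=%d)\n",
           freelist_len, high_water, low_water);
    return 0;
}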

I have taken performance data by varying the high and low water marks
for a lower value of shared buffers, which is as below:

Shared_buffers = 500MB
Scale_factor = 3000

HM - High water mark, 0.5 means 0.5% of total shared buffers
LM - Low water mark, 20 means 20% of HM.
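
To make these concrete: assuming the default 8KB block size, 500MB of
shared buffers is 64000 buffers, so the settings tested below work out
to roughly

HM=0.5% ->  320 buffers on the freelist; LM=20% of HM ->   64
HM=1%   ->  640 buffers;                 LM=20% of HM ->  128
HM=2%   -> 1280 buffers;                 LM=30% of HM ->  384
HM=5%   -> 3200 buffers;                 LM=30% of HM ->  960
HM=10%  -> 6400 buffers;                 LM=30% of HM -> 1920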

Client Count/Patch_Ver (Data in tps)     128
HM=0.5;LM=20                          166725
HM=1;LM=20                            166556
HM=2;LM=30                            166463
HM=5;LM=30                            166107
HM=10;LM=30                           167231
 

Observation
--------------------
a. There is hardly any difference from varying the high and low water
marks as compared to the default values currently used in the patch.
b. I think the minor dip as compared to the 64 client count is because,
first, this machine has 64 hardware threads, due to which scaling
beyond 64 clients is difficult, and second, at a relatively low shared
buffer setting (500MB) there is still minor contention around the
BufMapping locks (see the partitioning sketch below).
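
For context on point (b): lookups in the shared buffer mapping hash
table are protected by a fixed number of partition LWLocks, with the
partition chosen by hashing the buffer tag, so frequent buffer
replacement at a small shared_buffers setting concentrates lock traffic
on those few locks. The standalone C sketch below shows only the
partitioning idea; the hash, constants, and struct layout are simplified
assumptions, not PostgreSQL's actual code.

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for NUM_BUFFER_PARTITIONS (lwlock.h). */
#define NUM_PARTITIONS 16

typedef struct BufferTag            /* simplified stand-in for buftag */
{
    uint32_t relfilenode;           /* which relation */
    uint32_t blocknum;              /* which block within it */
} BufferTag;

/* Toy hash; PostgreSQL actually hashes the full buffer tag. */
static uint32_t
buftag_hash(const BufferTag *tag)
{
    uint32_t h = tag->relfilenode * 2654435761u;

    h ^= tag->blocknum * 2246822519u;
    return h;
}

/* Every lookup or replacement must hold the lock for this partition. */
static int
buftag_partition(const BufferTag *tag)
{
    return buftag_hash(tag) % NUM_PARTITIONS;
}

int
main(void)
{
    int hits[NUM_PARTITIONS] = {0};

    /* 10000 block lookups on one relation, spread over the partitions. */
    for (uint32_t blk = 0; blk < 10000; blk++)
    {
        BufferTag tag = { .relfilenode = 16384, .blocknum = blk };

        hits[buftag_partition(&tag)]++;
    }
    for (int i = 0; i < NUM_PARTITIONS; i++)
        printf("partition %2d: %d lookups\n", i, hits[i]);
    return 0;
}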

In general, I think that with the patch the scaling is much better
(2 times) than HEAD, even when shared buffers are small and the client
count is high, so this is not an issue.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
