Re: Scaling shared buffer eviction
From        | Amit Kapila
Subject     | Re: Scaling shared buffer eviction
Date        |
Msg-id      | CAA4eK1KwsBJKPgG7ntw_tuCeL8N06daFW_HsO60vqvgB=CJ1fw@mail.gmail.com
In reply to | Re: Scaling shared buffer eviction (Amit Kapila <amit.kapila16@gmail.com>)
List        | pgsql-hackers
On Wed, Sep 3, 2014 at 9:45 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Performance Data:
> -------------------------------
>
> Configuration and Db Details
> IBM POWER-7 16 cores, 64 hardware threads
> RAM = 64GB
> Database Locale =C
> checkpoint_segments=256
> checkpoint_timeout = 15min
> scale factor = 3000
> Common configuration remains the same as above.
> Client Count = number of concurrent sessions and threads (ex. -c 8 -j 8)
> Duration of each individual run = 5mins
>
> All the data is in tps and taken using pgbench read-only load.
>
> Shared_Buffers = 500MB
>
> Client Count/Patch_Ver |     8 |     16 |     32 |     64 |    128
> HEAD                   | 56248 | 100112 | 121341 |  81128 |  56552
> Patch                  | 59389 | 112483 | 157034 | 185740 | 166725
>
> Observations
> ---------------------
> 1. Performance improvement is up to 2~3 times for higher client
> counts (64, 128).
> 2. For lower client count (8), we can see 2~5% performance
> improvement.
> 3. Overall, this improves the read scalability.
> 4. For lower number of shared buffers, we see that there is a minor
> dip in tps even after the patch (it might be that we can improve it by
> tuning the high water mark for the number of buffers on the freelist;
> I will try this by varying the high water mark).
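For anyone not following the patch closely, the water marks control how
aggressively a background reclaimer refills the freelist. Below is a toy,
self-contained sketch of the idea; all identifiers and numbers in it are
illustrative, not the actual patch code:

/*
 * Toy sketch of the freelist water-mark scheme: a background reclaimer
 * keeps the freelist populated between a low and a high water mark, so
 * backends can take a free buffer cheaply instead of each running the
 * clock sweep themselves.
 */
#include <stdio.h>

#define NUM_BUFFERS     64000   /* e.g. 500MB of 8KB blocks */
#define HIGH_WATER_MARK   320   /* HM: 0.5% of NUM_BUFFERS */
#define LOW_WATER_MARK     64   /* LM: 20% of HM */

static int usage_count[NUM_BUFFERS];    /* toy buffer usage counts */
static int next_victim;                 /* clock hand */
static int freelist_len;                /* buffers on the freelist */

/* Clock sweep: decrement usage counts until a victim (count 0) is found. */
static int
clock_sweep_victim(void)
{
    for (;;)
    {
        int     buf = next_victim;

        next_victim = (next_victim + 1) % NUM_BUFFERS;
        if (usage_count[buf] == 0)
            return buf;
        usage_count[buf]--;             /* give it another chance */
    }
}

/* Reclaimer cycle: refill the freelist up to the high water mark. */
static void
reclaim_buffers(void)
{
    while (freelist_len < HIGH_WATER_MARK)
    {
        (void) clock_sweep_victim();    /* victim would go to freelist */
        freelist_len++;
    }
}

int
main(void)
{
    freelist_len = LOW_WATER_MARK - 1;  /* pretend backends drained it */
    if (freelist_len < LOW_WATER_MARK)  /* backends wake the reclaimer */
        reclaim_buffers();
    printf("freelist refilled to %d buffers\n", freelist_len);
    return 0;
}

The point is that backends pay the clock-sweep cost only when the
reclaimer falls behind, rather than on every buffer allocation.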
I have taken performance data by varying the high and low water marks
for a lower value of shared buffers, which is as below:
Shared_buffers = 500MB
Scale_factor = 3000
HM - High water mark, 0.5 means 0.5% of total shared buffers
LM - Low water mark, 20 means 20% of HM.
Client Count/Patch_Ver (Data in tps) |    128
HM=0.5;LM=20                         | 166725
HM=1;LM=20                           | 166556
HM=2;LM=30                           | 166463
HM=5;LM=30                           | 166107
HM=10;LM=30                          | 167231
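For reference, these percentages are small in absolute terms. A quick
back-of-the-envelope calculation, assuming the default 8KB block size
(so 500MB is 64000 buffers):

#include <stdio.h>

int
main(void)
{
    const int       num_buffers = (500 * 1024) / 8;    /* 64000 */
    const double    hm_pct[] = {0.5, 1, 2, 5, 10};     /* HM settings tried */
    const double    lm_pct[] = {20, 20, 30, 30, 30};   /* matching LM settings */

    for (int i = 0; i < 5; i++)
    {
        int     hm = (int) (num_buffers * hm_pct[i] / 100.0);
        int     lm = (int) (hm * lm_pct[i] / 100.0);

        printf("HM=%-4g -> %5d buffers on freelist; LM=%g -> refill below %4d\n",
               hm_pct[i], hm, lm_pct[i], lm);
    }
    return 0;
}

Even the largest setting (HM=10) keeps only 6400 of the 64000 buffers
on the freelist, which lines up with observation (a) below.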
Observation
--------------------
a. Varying the high and low water marks makes hardly any difference
compared to the default values currently used in the patch.
b. I think the minor dip compared to the 64-client count is because,
first, this machine has 64 hardware threads, due to which scaling
beyond 64 clients is difficult, and second, at a relatively low buffer
count (500MB) there is still minor contention around the BufMappingLocks.
In general, I think that with the patch the scaling is much better
(2 times) than HEAD, even when shared buffers are small and the client
count is high, so this is not an issue.