Re: Speed up Clog Access by increasing CLOG buffers

From: Dilip Kumar
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date:
Msg-id: CAFiTN-tr_=25EQUFezKNRk=4N-V+D6WMxo7HWs9BMaNx7S3y6w@mail.gmail.com
In reply to: Re: Speed up Clog Access by increasing CLOG buffers  (Dilip Kumar <dilipbalaut@gmail.com>)
List: pgsql-hackers
On Wed, Sep 21, 2016 at 8:47 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> Summary:
> --------------
> At 32 clients no gain, I think at this workload Clog Lock is not a problem.
> At 64 Clients we can see ~10% gain with simple update and ~5% with TPCB.
> At 128 Clients we can see > 50% gain.
>
> Currently I have tested with synchronous commit=off, later I can try
> with on. I can also test at 80 client, I think we will see some
> significant gain at this client count also, but as of now I haven't
> yet tested.
>
> With above results, what we think ? should we continue our testing ?

I have done further testing with the TPC-B workload to see the impact of
a larger scale factor on the performance gain.

Again, at 32 clients there is no gain, but at 64 clients the gain is
~12% and at 128 clients it is ~67%. This shows that the improvement with
the group lock is better at a higher scale factor (at scale factor 300
the gain was 5% at 64 clients and 50% at 128 clients).

8-socket machine (kernel 3.10)
10-minute runs (median of 3 runs)
synchronous_commit = off
scale factor = 1000
shared_buffers = 40GB
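For reference, a run like this could be driven with pgbench along these
lines (a sketch only; the exact flags, such as the -j thread count, are my
assumptions and are not stated in the mail):

```shell
# one-time initialization at scale factor 1000
pgbench -i -s 1000 postgres

# TPC-B style runs: 10 minutes (-T 600) per client count,
# repeated 3 times to take the median
for c in 32 64 128; do
    pgbench -c "$c" -j "$c" -T 600 postgres
done
```

with synchronous_commit = off and shared_buffers = '40GB' set in
postgresql.conf beforehand.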

Test results (TPS):
----------------

client      head      group lock
32         27496       27178
64         31275       35205
128        20656       34490


LWLOCK_STATS approx. block count on ClogControl Lock ("lwlock main 11")
--------------------------------------------------------------------------------------------------------
client      head      group lock
32         80000       60000
64        150000      100000
128       140000       70000

Note: These are approximate block counts. I have the detailed
LWLOCK_STATS output in case anyone wants to look into it.


LWLOCK_STATS shows that the ClogControlLock block count is reduced by
25% at 32 clients, 33% at 64 clients, and 50% at 128 clients.

Conclusion:
1. I think both LWLOCK_STATS and the performance data show that we get a
significant contention reduction on ClogControlLock with the patch.
2. It also shows that although we are not seeing any performance gain at
32 clients, there is still a contention reduction with the patch.

I am planning to do some more tests at a higher scale factor (3000 or more).

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


