Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

From: Ashutosh Sharma
Subject: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers
Date:
Msg-id: CAE9k0PkdmKwpdZG9FX_5pZafYCetS814a3WoXA2ng1hzjvWueg@mail.gmail.com
In response to: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
Hi All,

I have tried to test 'group_update_clog_v11.1.patch', shared upthread by Amit, on a high-end machine. I have tested the patch with varying numbers of savepoints in my test script. The machine details, test script, and test results are shown below:

Machine details:
============
24 sockets, 192 CPU(s)
RAM - 500GB

test script:
========

\set aid random(1, 30000000)
\set tid random(1, 3000)

BEGIN;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid for UPDATE;
SAVEPOINT s1;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid for UPDATE;
SAVEPOINT s2;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid for UPDATE;
SAVEPOINT s3;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid for UPDATE;
SAVEPOINT s4;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid for UPDATE;
SAVEPOINT s5;
SELECT tbalance FROM pgbench_tellers WHERE tid = :tid for UPDATE;
END;

Non-default parameters
==================
max_connections = 200
shared_buffers = 8GB
min_wal_size = 10GB
max_wal_size = 15GB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
checkpoint_timeout = 900
synchronous_commit = off


pgbench -M prepared -c $thread -j $thread -T $time_for_reading postgres -f ~/test_script.sql

where time_for_reading = 10 minutes

Test Results:
=========

With 3 savepoints
=============

CLIENT COUNT   TPS (HEAD)   TPS (PATCH)   % IMPROVEMENT
128            50275        53704          6.82048732
64             62860        66561          5.887686923
8              18464        18752          1.559792028


With 5 savepoints
=============

CLIENT COUNT   TPS (HEAD)   TPS (PATCH)   % IMPROVEMENT
128            46559        47715          2.482871196
64             52306        52082         -0.4282491492
8              12289        12852          4.581332899



With 7 savepoints
=============

CLIENT COUNT   TPS (HEAD)   TPS (PATCH)   % IMPROVEMENT
128            41367        41500          0.3215123166
64             42996        41473         -3.542189971
8               9665         9657         -0.0827728919


With 10 savepoints
==============

CLIENT COUNT   TPS (HEAD)   TPS (PATCH)   % IMPROVEMENT
128            34513        34597          0.24338655
64             32581        32035         -1.675823333
8               7293         7622          4.511175099

Conclusion:
As seen from the test results above, there is some performance improvement with 3 savepoints; with 5 savepoints the results with the patch are slightly better than HEAD; and with 7 and 10 savepoints we do see a regression with the patch. Therefore, the threshold of 4 subtransactions used in the patch looks fine to me.
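The cutoff this conclusion supports can be pictured as a simple gate on the subtransaction count before attempting the group update. This is only an illustrative sketch; the constant and struct names below are made up for the example and are not the patch's actual identifiers:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical cutoff matching the conclusion above: group CLOG
 * update is attempted only for transactions with at most 4
 * subtransactions (savepoints). */
#define GROUP_UPDATE_MAX_SUBXIDS 4

/* Simplified stand-in for the backend's transaction state. */
typedef struct XactInfo
{
    size_t nsubxids;    /* number of open subtransactions */
} XactInfo;

static bool use_group_clog_update(const XactInfo *xact)
{
    /* Beyond the cutoff the extra work outweighs the win, as the
     * 7- and 10-savepoint numbers above suggest. */
    return xact->nsubxids <= GROUP_UPDATE_MAX_SUBXIDS;
}
```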


--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com


On Tue, Mar 21, 2017 at 6:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Mar 20, 2017 at 8:27 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Mar 17, 2017 at 2:30 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> I was wondering about doing an explicit test: if the XID being
>>> committed matches the one in the PGPROC, and nsubxids matches, and the
>>> actual list of XIDs matches, then apply the optimization.  That could
>>> replace the logic that you've proposed to exclude non-commit cases,
>>> gxact cases, etc. and it seems fundamentally safer.  But it might be a
>>> more expensive test, too, so I'm not sure.
>>
>> I think if the number of subxids is very small let us say under 5 or
>> so, then such a check might not matter, but otherwise it could be
>> expensive.
>
> We could find out by testing it.  We could also restrict the
> optimization to cases with just a few subxids, because if you've got a
> large number of subxids this optimization probably isn't buying much
> anyway.
>

Yes, and I have modified the patch to compare xids and subxids for
group update.  In the initial short tests (with a few client counts),
it seems that up to 3 savepoints we can win, and from 10 savepoints
onwards there is some regression, or at the very least there doesn't
appear to be any benefit.  We need more tests to identify the safe
number, but I thought it better to share the patch to see if we agree
on the changes, because if not, the whole testing needs to be
repeated.  Let me know what you think about the attached?
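The explicit safety test Robert described upthread (apply the optimization only when the XID being committed, the subxid count, and the actual subxid list all match what the PGPROC advertises) could be sketched roughly as follows. This is a hedged illustration under simplified types, not the patch's actual code; the struct and function names are invented for the example:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;

/* Simplified stand-in for the relevant PGPROC fields. */
typedef struct ProcState
{
    TransactionId xid;
    int           nsubxids;
    TransactionId subxids[64];
} ProcState;

/* The optimization is applied only when the committing transaction's
 * XID, subxid count, and full subxid list match the PGPROC entry;
 * any mismatch (non-commit case, gxact case, etc.) falls through to
 * the ordinary path. */
static bool can_group_update(const ProcState *proc, TransactionId xid,
                             int nsubxids, const TransactionId *subxids)
{
    if (proc->xid != xid || proc->nsubxids != nsubxids)
        return false;
    return memcmp(proc->subxids, subxids,
                  nsubxids * sizeof(TransactionId)) == 0;
}
```

As the thread notes, the list comparison is the potentially expensive part, which is why restricting the optimization to a small number of subxids keeps the check cheap.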



--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


In the pgsql-hackers list, by date:

Previous
From: Thomas Munro
Date:
Message: Re: [HACKERS] WIP: [[Parallel] Shared] Hash
Next
From: Craig Ringer
Date:
Message: Re: [HACKERS] Logical decoding on standby