Re: Relation extension scalability

From: Dilip Kumar
Subject: Re: Relation extension scalability
Date:
Msg-id: CAFiTN-s-YdPoixtSEfKNpv0QMgJ_7fVyES143tyx5s58d=byPA@mail.gmail.com
In reply to: Re: Relation extension scalability  (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: Relation extension scalability  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers

On Wed, Feb 10, 2016 at 7:06 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

I have tested the relation extension patch from various aspects; the performance results and other statistical data are explained in this mail.

Test 1: Identify whether the heavyweight lock is the problem or the actual context switches are.
1. I converted the RelationExtensionLock to a simple LWLock and tested with a single relation. Results are below.
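
To make the comparison concrete, the base code takes the heavyweight relation extension lock around extending a single block, and the Test 1 variant replaces it with a plain LWLock. A rough sketch of the two shapes (not the actual patch; rel_extension_lwlock is a hypothetical per-relation LWLock and all error handling is omitted):

/*
 * Sketch only, not the actual patch: the shape of the change measured in
 * Test 1 -- heavyweight relation extension lock vs. a plain LWLock around
 * single-block extension.
 */
#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/lmgr.h"
#include "storage/lwlock.h"
#include "utils/rel.h"

/* Base code: heavyweight lock (deadlock detection, proc queueing, ...). */
static Buffer
extend_one_block_base(Relation relation)
{
    Buffer      buffer;

    LockRelationForExtension(relation, ExclusiveLock);
    buffer = ReadBuffer(relation, P_NEW);       /* extends by one block */
    UnlockRelationForExtension(relation, ExclusiveLock);

    return buffer;
}

/*
 * Test 1 variant: same critical section, but guarded by an LWLock.
 * rel_extension_lwlock is a hypothetical per-relation LWLock added only
 * for this experiment.
 */
static Buffer
extend_one_block_lwlock(Relation relation, LWLock *rel_extension_lwlock)
{
    Buffer      buffer;

    LWLockAcquire(rel_extension_lwlock, LW_EXCLUSIVE);
    buffer = ReadBuffer(relation, P_NEW);
    LWLockRelease(rel_extension_lwlock);

    return buffer;
}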

The workload is a simple script that copies 10000 records of 4 bytes each in one transaction; the numbers below are TPS.
client   base   lwlock   multi_extend by 50 blocks
1         155    156      160
2         282    276      284
4         248    319      428
8         161    267      675
16        143    241      889

LWLock performance is better than base; the obvious reason may be that we save some instructions by converting to an LWLock, but it does not scale any better than the base code.
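
For reference, this kind of copy test can be approximated with a small libpq program run concurrently from N client processes. This is only a sketch under my own assumptions (the table name test_tab and its single int4 column are mine, not taken from the original script):

/*
 * Sketch of the Test 1 style workload: COPY 10000 small records in one
 * transaction.  Assumes a table created as
 *     CREATE TABLE test_tab(a int4);
 * (table name and schema are assumptions, not from the original mail).
 *
 * Build: cc copy_test.c -o copy_test -I$(pg_config --includedir) \
 *            -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
    PGconn     *conn = PQconnectdb("");         /* uses PG* environment vars */
    PGresult   *res;
    char        row[32];
    int         i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));

    res = PQexec(conn, "COPY test_tab FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(res);

    for (i = 0; i < 10000; i++)
    {
        int         len = snprintf(row, sizeof(row), "%d\n", i);

        PQputCopyData(conn, row, len);
    }
    PQputCopyEnd(conn, NULL);
    PQclear(PQgetResult(conn));                 /* final COPY result */

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}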


Test 2: Identify whether the improvement in the multi-extend case comes from avoiding context switches or from some other factor, like reusing blocks between backends by putting them in the FSM.

1. Test by just extending multiple blocks and reusing them in the extending backend itself (do not put them in the FSM).
Insert test with 1K (1024-byte) records, where the data does not fit in shared buffers (512MB shared_buffers).

Client   Base   Extend 800 blocks (self use)   Extend 1000 blocks
1         117        131                            118
2         111        203                            140
3          51        242                            178
4          51        231                            190
5          52        259                            224
6          51        263                            243
7          43        253                            254
8          43        240                            254
16         40        190                            243

We can see the same improvement when the backend uses the blocks itself, which shows that sharing the blocks between backends was not the win; avoiding the context switches was the major win.
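
For context, the multi-extend approach being measured has roughly the following shape: while holding the extension lock, add N empty pages and (in the shared variant) publish all but the first one in the FSM so other backends can pick them up. This is a sketch only, not the actual patch; extend_multiple_blocks and its signature are placeholders:

/*
 * Sketch of the multi-extend idea (not the actual patch): extend the
 * relation by nblocks pages under the extension lock, keep the first
 * page for our own insert, and publish the rest in the FSM.
 */
#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"
#include "storage/freespace.h"
#include "storage/lmgr.h"
#include "utils/rel.h"

static Buffer
extend_multiple_blocks(Relation relation, int nblocks)
{
    Buffer      first_buf = InvalidBuffer;
    int         i;

    LockRelationForExtension(relation, ExclusiveLock);

    for (i = 0; i < nblocks; i++)
    {
        Buffer      buf = ReadBuffer(relation, P_NEW);
        Page        page;
        Size        freespace;
        BlockNumber blkno;

        LockBuffer(buf, BUFFER_LOCK_EXCLUSIVE);
        page = BufferGetPage(buf);
        PageInit(page, BufferGetPageSize(buf), 0);
        MarkBufferDirty(buf);
        freespace = PageGetHeapFreeSpace(page);
        blkno = BufferGetBlockNumber(buf);
        LockBuffer(buf, BUFFER_LOCK_UNLOCK);

        if (i == 0)
            first_buf = buf;    /* keep the first block pinned for ourselves */
        else
        {
            ReleaseBuffer(buf);
            /* publish the extra block so any backend can reuse it */
            RecordPageWithFreeSpace(relation, blkno, freespace);
        }
    }

    UnlockRelationForExtension(relation, ExclusiveLock);

    return first_buf;
}

The "self use" variant from point 1 above is the same loop, except the extra blocks are remembered in a backend-local list instead of being handed to RecordPageWithFreeSpace().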

2. Tested the number of ProcSleep calls during the run.
The workload is the same simple script that copies 10000 records of 4 bytes each in one transaction.

                  BASE CODE                          PATCH MULTI EXTEND
Client   Base_TPS   ProcSleep Count     Extend by 10 blocks (TPS)   ProcSleep Count
2           280         457,506                 311                      62,641
3           235       1,098,701                 358                     141,624
4           216       1,155,735                 368                     188,173

What we can see in the above test is that in the base code performance degrades after 2 clients, while the ProcSleep count increases enormously.

Compared to that, with the patch extending 10 blocks at a time, the ProcSleep count drops to roughly 1/8 and we can see it scaling steadily.

The ProcSleep test for the insert workload where the data does not fit in shared buffers, inserting big records of 1024 bytes, is currently running; once I get the data I will post it.
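
As an aside, ProcSleep counts like the ones above can be gathered in more than one way; a trivial option (my assumption about methodology, not necessarily how these numbers were collected) is a per-backend counter around ProcSleep(), for example:

/*
 * Fragment, not a complete patch: count ProcSleep() calls per backend.
 * The counter would live in src/backend/storage/lmgr/proc.c and be
 * reported in the server log when the backend exits.
 */
static uint64 procsleep_calls = 0;

/* ... at the top of ProcSleep(): */
procsleep_calls++;

/* ... in an on_proc_exit callback registered at backend start: */
elog(LOG, "ProcSleep entered " UINT64_FORMAT " times in backend %d",
     procsleep_calls, MyProcPid);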

Posting the rebased version and moving the patch to the next CF.

Open points:
1. After getting the lock, recheck the FSM in case some other backend has already added extra blocks, and reuse them (see the sketch after this list).
2. Is it a good idea to have a user-level parameter for extend_by_block, or can we try some approach to internally identify how many blocks are needed and add only as many as required? That would make it more flexible.
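
For open point 1, the recheck could look roughly like this (a sketch, not the actual patch; real code would also re-verify that the page returned by the FSM still has enough free space):

/*
 * Sketch of open point 1: after acquiring the extension lock, look in the
 * FSM once more before extending, in case a backend we waited behind has
 * already added and published extra blocks.
 */
#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/freespace.h"
#include "storage/lmgr.h"
#include "utils/rel.h"

static Buffer
get_block_for_insert(Relation relation, Size len)
{
    BlockNumber targetBlock;
    Buffer      buffer;

    LockRelationForExtension(relation, ExclusiveLock);

    /* Recheck the FSM: another waiter may have extended already. */
    targetBlock = GetPageWithFreeSpace(relation, len);
    if (targetBlock != InvalidBlockNumber)
    {
        UnlockRelationForExtension(relation, ExclusiveLock);
        return ReadBuffer(relation, targetBlock);
    }

    /*
     * Nothing reusable; extend ourselves.  The patch would extend by
     * extend_by_block pages here and publish the extras in the FSM.
     */
    buffer = ReadBuffer(relation, P_NEW);
    UnlockRelationForExtension(relation, ExclusiveLock);

    return buffer;
}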


--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachments
