Re: Move unused buffers to freelist

From: Amit Kapila
Subject: Re: Move unused buffers to freelist
Date:
Msg-id: 00cc01ce5240$2da362b0$88ea2810$@kapila@huawei.com
In reply to: Re: Move unused buffers to freelist  (Amit Kapila <amit.kapila@huawei.com>)
List: pgsql-hackers

On Wednesday, May 15, 2013 8:38 AM Amit Kapila wrote:

> On Wednesday, May 15, 2013 12:44 AM Greg Smith wrote:
> > On 5/14/13 9:42 AM, Amit Kapila wrote:
> > > In the attached patch, bgwriter/checkpointer moves unused
> > > (usage_count = 0 && refcount = 0) buffers to the end of the
> > > freelist. I have implemented a new API
> > > StrategyMoveBufferToFreeListEnd() to
> >
> > There's a comment in the new function:
> >
> > It is possible that we are told to put something in the freelist that
> > is already in it; don't screw up the list if so.
> >
> > I don't see where the code does anything to handle that though.  What
> > was your intention here?

>
> The intention is to put the entry in the freelist only if it is not
> already in the freelist, which is accomplished by the check
> if (buf->freeNext == FREENEXT_NOT_IN_LIST).  Whenever an entry is
> removed from the freelist, buf->freeNext is marked as
> FREENEXT_NOT_IN_LIST.
> Code reference (last line):
>
> StrategyGetBuffer()
> {
> ..
> ..
> while (StrategyControl->firstFreeBuffer >= 0)
> {
>     buf = &BufferDescriptors[StrategyControl->firstFreeBuffer];
>     Assert(buf->freeNext != FREENEXT_NOT_IN_LIST);
>
>     /* Unconditionally remove buffer from freelist */
>     StrategyControl->firstFreeBuffer = buf->freeNext;
>     buf->freeNext = FREENEXT_NOT_IN_LIST;
> ...
> }
>
> The same check also exists in StrategyFreeBuffer().

>
> > This area has always been the tricky part of the change.  If you do
> > something complicated when adding new entries, like scanning the
> > freelist for duplicates, you run the risk of holding BufFreelistLock
> > for too long.
>
> Yes, this is true, and I have tried to hold this lock for a minimal
> time.  In this patch, BufFreelistLock is held only to put the unused
> buffer at the end of the freelist.

>
> > To try and see that in benchmarks, I would use a small database
> > scale (I typically use 100 for this type of test) and a large
> > number of clients.

I shall try this test.  Do you have any suggestions for shared_buffers
and the number of clients for scale factor 100?

 

> > "-M prepared" would help get a higher transaction rate out of the
> > hardware too.  It might take a server with a large core count to
> > notice any issues with holding the lock for too long though.
>
> This is a good idea; I shall take another set of readings with
> "-M prepared".
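For reference, the kind of run Greg describes might look like the following (illustrative only: the client/thread counts, duration, and database name are arbitrary choices, not values from this thread):

```shell
# Small scale so the working set churns through shared_buffers quickly.
pgbench -i -s 100 postgres

# Many clients with prepared statements; -j spreads the clients across
# pgbench worker threads so the client side doesn't bottleneck first.
pgbench -M prepared -c 64 -j 64 -T 1200 postgres
```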

>
> > Instead you might just invalidate buffers before they go onto the
> > list.  Doing that will then throw away usefully cached data though.
>
> Yes, if we invalidate buffers, it might throw away usefully cached
> data, especially when the working set is just a tiny bit smaller than
> shared_buffers.
> This was pointed out by Robert in his mail:
> http://www.postgresql.org/message-id/CA+TgmoYhWsz__KtSxm6BuBirE7VR6Qqc_COkbEZTQpk8oom3CA@mail.gmail.com
>

> > To try and optimize both insertion speed and retaining cached data,
>
> I think the method proposed by the patch takes care of both: it
> directly puts the free buffer at the end of the freelist, and because
> it doesn't invalidate the buffers, it can retain cached data for a
> longer period.
> Do you see any flaw in the current approach?
>

> > I was thinking about using a hash table for the free buffers,
> > instead of the simple linked list approach used in the code now.
>
> Okay, we can try different methods for maintaining free buffers if we
> find the current approach doesn't turn out to be good.

> > Also: check the formatting of the additions to bufmgr.c; I noticed
> > a spaces vs. tabs difference on lines 35/36 of your patch.
>
> Thanks for pointing it out; I shall send an updated patch along with
> the next set of performance data.

 

 

Further Performance Data:

Below data is the average of 3 runs of 20 minutes each.

Scale Factor   - 1200
Shared Buffers - 7G

                   8C-8T                16C-16T               32C-32T            64C-64T
HEAD               1739                   1461                 1578                 1609
After Patch        4029                   1924                 1743                 1706

Scale Factor   - 1200
Shared Buffers - 10G

                   8C-8T                16C-16T               32C-32T            64C-64T
HEAD               2004                   2270                 2195                 2173
After Patch        2298                   2172                 2111                 2044

Detailed data of the 3 runs is attached with this mail.

 

Observations:

1.  For scale factor 1200, with 5G and 7G shared buffers:
    a.  There is a reasonably good performance gain after the patch (>50%).
    b.  However, the performance increase is not as good when the number
        of clients/threads increases.  The reason can be that at a higher
        number of clients/threads, other blocking factors (other LWLocks,
        I/O) limit the benefit of moving buffers to the freelist.

2.  For scale factor 1200, with 10G shared buffers:
    a.  A performance increase is observed for 8 clients/8 threads.
    b.  There is a performance dip (3~6%) from 16 clients onwards.  The
        reasons could be:
        i.   With such a long buffer list, the bgwriter frequently taking
             BufFreelistLock (bgwriter_delay = 200ms) can add concurrency
             overhead that overcomes the benefit of getting a buffer from
             the freelist.
        ii.  Sometimes it comes to free a buffer that is already in the
             freelist.  This can also add a small overhead, because
             currently, to check whether a buffer is in the freelist, we
             need to take BufFreelistLock.

I will try to find more reasons for 2.b and work to resolve its
performance dip.

 

Any suggestions would be really helpful in proceeding to crack this problem.

 

With Regards,

Amit Kapila.

 

 

 

 
