Re: Scaling shared buffer eviction

From: Andres Freund
Subject: Re: Scaling shared buffer eviction
Date:
Msg-id: 20141009140155.GC29124@awork2.int
In reply to: Re: Scaling shared buffer eviction (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: Scaling shared buffer eviction
List: pgsql-hackers
On 2014-10-09 18:17:09 +0530, Amit Kapila wrote:
> On Fri, Sep 26, 2014 at 7:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> >
> > On another point, I think it would be a good idea to rebase the
> > bgreclaimer patch over what I committed, so that we have a
> > clean patch against master to test with.
> 
> Please find the rebased patch attached with this mail.  I have taken
> some performance data as well and done some analysis based on
> the same.
> 
> Performance Data
> ----------------------------
> IBM POWER-8: 24 cores, 192 hardware threads
> RAM = 492GB
> max_connections = 300
> Database Locale = C
> checkpoint_segments = 256
> checkpoint_timeout = 15min
> shared_buffers = 8GB
> scale factor = 5000
> Client Count = number of concurrent sessions and threads (ex. -c 8 -j 8)
> Duration of each individual run = 5mins

I don't think OLTP really is the best test case for this. Especially not
pgbench with relatively small rows *and* a uniform distribution of
access.

Try parallel COPY TO. Batch write loads are where I've seen this hurt
badly.

> patch_ver/client_count      1      8     32     64    128    256
> HEAD                    18884 118628 251093 216294 186625 177505
> PATCH                   18743 122578 247243 205521 179712 175031

So, pretty much no benefits on any scale, right?


> Here we can see that the performance dips at higher client
> counts (>=32), which was quite surprising for me, as I was expecting
> it to improve, because bgreclaimer reduces the contention by making
> buffers available on the free list.  So I tried to analyze the situation
> using perf and found that in the above configuration there is contention
> around the freelist spinlock with HEAD, and the same is removed by the
> patch, but still the performance goes down with the patch.  On further
> analysis, I observed that after the patch there is actually an increase
> in contention around ProcArrayLock (shared LWLock) via GetSnapshotData,
> which sounds a bit odd, but that's what I can see in the profiles.
> Based on this analysis, a few ideas which I would like to investigate
> further are:
> a.  As there is an increase in spinlock contention, I would like to check
> with Andres's latest patch which reduces contention around shared
> lwlocks.
> b.  Reduce some instructions added by the patch in StrategyGetBuffer(),
> e.g. instead of awakening bgreclaimer at a low threshold, awaken it when
> a backend tries to do the clock sweep.
> 

Are you sure you didn't mix up the profiles here? The HEAD vs. patched
results look more like profiles from different client counts than from
different versions of the code.


Greetings,

Andres Freund

--
Andres Freund                       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


