Re: Scaling shared buffer eviction

From: Robert Haas
Subject: Re: Scaling shared buffer eviction
Date:
Msg-id: CA+TgmoaBreKS=jjATtvF_Ec9S_sc2cO3UG-itLGDwn1+SoHdDg@mail.gmail.com
In response to: Re: Scaling shared buffer eviction  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: Scaling shared buffer eviction  (Kevin Grittner <kgrittn@ymail.com>)
List: pgsql-hackers
On Thu, Sep 4, 2014 at 7:25 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> It's not difficult to handle such cases, but it can also have a downside
> for cases where the demand from backends is not high.
> Consider the above case: if instead of 500 more allocations there are
> just 5 more, then bgreclaimer will again have to go through the list and
> move 5 buffers, and the same can happen again by the time it moves those
> 5 buffers.

That's exactly the scenario in which we *want* the looping behavior.
If that's happening, then it means it's taking us exactly as long to
find 5 buffers as it takes the rest of the system to use 5 buffers.
We need to run continuously to keep up.
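
To make the looping behavior concrete, here is a rough sketch of that main
loop; the helper names are hypothetical stand-ins for the patch's actual
logic, not its real functions:

    extern int  num_freelist_buffers_needed(void);  /* demand from backends */
    extern void move_buffers_to_freelist(int n);
    extern void wait_for_backend_request(void);

    static void
    bgreclaimer_main_loop(void)
    {
        for (;;)
        {
            int     demand = num_freelist_buffers_needed();

            if (demand == 0)
            {
                /* freelist adequately stocked; sleep until a backend asks */
                wait_for_backend_request();
                continue;
            }

            /*
             * Keep going while there is demand.  If backends allocate 5 more
             * buffers in the time it takes to move 5, demand stays nonzero
             * and we run continuously, which is the behavior we want here.
             */
            move_buffers_to_freelist(demand);
        }
    }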

>> It's not.  But if they are in the same cache line, they will behave
>> almost like one lock, because the CPU will lock the entire cache line
>> for each atomic op.  See Tom's comments upthread.
>
> I think that to avoid having them in the same cache line, we might need
> to add some padding (at least 72 bytes), as the structure size including
> both spinlocks is 56 bytes on a PPC64 machine and the cache line size is
> 128 bytes.  I have also taken performance data with them kept further
> apart, as you suggested upthread, and with padding introduced, but the
> difference in performance is less than 1.5% (at 64 and 128 client
> counts), which might also be due to variation of data across runs.  So
> to proceed we have the options below:
>
> a. Use two spinlocks as in the patch, but keep them as far apart as
> possible.  This might not have an advantage compared to what the patch
> currently does, but in future we can add padding to take advantage of it
> if possible (currently on PPC64 it doesn't show any noticeable
> advantage, but on some other machine it might).
>
> b. Use only one spinlock.  This can have a disadvantage in certain cases
> as mentioned upthread, but those might not be the usual cases, so for
> now we can consider them lower priority and choose this option.

I guess I don't care that much.  I only mentioned it because Tom
brought it up; I don't really see a big problem with the way you're
doing it.
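
For what it's worth, the padding approach you describe would look roughly
like the sketch below, assuming the 128-byte PPC64 cache line mentioned
above; the field names are placeholders rather than the patch's actual
BufferStrategyControl layout:

    #include "storage/s_lock.h"

    #define CACHE_LINE_SIZE 128     /* PPC64 figure quoted upthread */

    typedef struct
    {
        slock_t     freelist_lck;   /* protects the buffer freelist */
        char        pad1[CACHE_LINE_SIZE - sizeof(slock_t)];

        slock_t     victimbuf_lck;  /* protects clock-sweep victim selection */
        char        pad2[CACHE_LINE_SIZE - sizeof(slock_t)];

        /* ... remaining strategy fields ... */
    } PaddedStrategyControl;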

> Another point in this regard is that I have to use a volatile pointer to
> prevent code rearrangement in this case.

Yep.  Or we need to get off our duff and fix it so that's not necessary.
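
For the record, the idiom being referred to is accessing the shared fields
through a volatile-qualified pointer inside the spinlocked section, since
our spinlock calls are not compiler barriers today.  A rough sketch, with
approximate names (freelist_lck is the patch's lock; BufferStrategyControl
and StrategyControl live in freelist.c):

    #include "postgres.h"
    #include "storage/buf_internals.h"
    #include "storage/spin.h"

    static void
    push_to_freelist(BufferDesc *buf)
    {
        /*
         * The volatile qualifier keeps the compiler from rearranging these
         * loads and stores outside the SpinLockAcquire/SpinLockRelease pair.
         */
        volatile BufferStrategyControl *strategy = StrategyControl;

        SpinLockAcquire(&strategy->freelist_lck);
        buf->freeNext = strategy->firstFreeBuffer;
        strategy->firstFreeBuffer = buf->buf_id;
        SpinLockRelease(&strategy->freelist_lck);
    }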

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


