Re: 9.2beta1, parallel queries, ReleasePredicateLocks, CheckForSerializableConflictIn in the oprofile

From: Merlin Moncure
Subject: Re: 9.2beta1, parallel queries, ReleasePredicateLocks, CheckForSerializableConflictIn in the oprofile
Msg-id: CAHyXU0yd6RhDX2ySqo5yy+Kj_hdMC+ZXXHWTGGL__4x8QLr_EQ@mail.gmail.com
In response to: Re: 9.2beta1, parallel queries, ReleasePredicateLocks, CheckForSerializableConflictIn in the oprofile  (Florian Pflug <fgp@phlo.org>)
Responses: Re: 9.2beta1, parallel queries, ReleasePredicateLocks, CheckForSerializableConflictIn in the oprofile  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Fri, Jun 1, 2012 at 7:47 AM, Florian Pflug <fgp@phlo.org> wrote:
> On May31, 2012, at 20:50 , Robert Haas wrote:
>> Suppose we introduce two new buffer flags,
>> BUF_NAILED and BUF_NAIL_REMOVAL.  When we detect excessive contention
>> on the buffer header spinlock, we set BUF_NAILED.  Once we do that,
>> the buffer can't be evicted until that flag is removed, and backends
>> are permitted to record pins in a per-backend area protected by a
>> per-backend spinlock or lwlock, rather than in the buffer header.
>> When we want to un-nail the buffer, we set BUF_NAIL_REMOVAL.  At that
>> point, it's no longer permissible to record new pins in the
>> per-backend areas, but old ones may still exist.  So then we scan all
>> the per-backend areas and transfer the pins to the buffer header, or
>> else just wait until no more exist; then, we clear both BUF_NAILED and
>> BUF_NAIL_REMOVAL.
>
> A simpler idea would be to collapse UnpinBuffer() / PinBuffer() pairs
> by queueing UnpinBuffer() requests for a while before actually updating
> shared state.
>
> I'm imagining having a small unpin queue with, say, 32 entries in
> backend-local memory. When we unpin a buffer, we add the buffer at the
> front of the queue. If the queue overflows, we dequeue a buffer from the
> back of the queue and actually call UnpinBuffer(). If PinBuffer() is called
> for a queued buffer, we simply remove the buffer from the queue.
>
> We'd drain the unpin queue whenever we don't expect a PinBuffer() request
> to happen for a while. Returning to the main loop is an obvious such place,
> but there might be others. We could, for example, drain the queue every time
> we block on a lock or signal, and maybe also before we go do I/O. Or, we
> could have one such queue per resource owner, and drain it when we release
> the resource owner.
>
> We already avoid calling PinBuffer() multiple times for multiple overlapping
> pins of a single buffer by a single backend. The strategy above would extend
> that to not-quite-overlapping pins.
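The queued-unpin idea above can be sketched as a small backend-local ring. This is only an illustration, not PostgreSQL code: the names (deferred_unpin, try_cancel_unpin, drain_unpin_queue, real_unpin) are invented, and real_unpin() stands in for the real UnpinBuffer().

```c
#include <assert.h>
#include <string.h>

/* Invented sketch of the proposed backend-local unpin queue.
 * Oldest deferred unpin sits at index 0; new entries go at the end. */
#define UNPIN_QUEUE_SIZE 32

static int unpin_queue[UNPIN_QUEUE_SIZE];
static int unpin_count = 0;
static int real_unpins = 0;     /* counts actual (shared-state) unpins */

static void
real_unpin(int buf)             /* stand-in for the real UnpinBuffer() */
{
    (void) buf;
    real_unpins++;
}

/* Defer an unpin; on overflow, really unpin the oldest queued buffer. */
static void
deferred_unpin(int buf)
{
    if (unpin_count == UNPIN_QUEUE_SIZE)
    {
        real_unpin(unpin_queue[0]);
        memmove(unpin_queue, unpin_queue + 1,
                (UNPIN_QUEUE_SIZE - 1) * sizeof(int));
        unpin_count--;
    }
    unpin_queue[unpin_count++] = buf;
}

/* On re-pin, cancel a queued unpin instead of touching shared state.
 * Returns 1 if a pin/unpin pair was collapsed, 0 if the caller must
 * do a real PinBuffer(). */
static int
try_cancel_unpin(int buf)
{
    for (int i = 0; i < unpin_count; i++)
    {
        if (unpin_queue[i] == buf)
        {
            memmove(unpin_queue + i, unpin_queue + i + 1,
                    (unpin_count - i - 1) * sizeof(int));
            unpin_count--;
            return 1;
        }
    }
    return 0;
}

/* Drain at a quiescent point, e.g. when returning to the main loop,
 * blocking on a lock, or releasing a resource owner. */
static void
drain_unpin_queue(void)
{
    for (int i = 0; i < unpin_count; i++)
        real_unpin(unpin_queue[i]);
    unpin_count = 0;
}
```

A re-pin that hits the queue costs only a local scan of at most 32 entries, while the shared buffer header is touched only when an entry ages out or the queue is drained.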

A potential issue with this line of thinking is that your pin delay
queue could come under heavy pressure from outer portions of the query
(as in the OP's case) that get little or no benefit from the delayed
pin.  But a sufficiently sized drain queue would cover most reasonable
cases, if 32 isn't enough.  Why not something much larger, for example
the lesser of 1024 and (NBuffers * .25) / max_connections?  In other
words, for you to get much benefit, you have to pin the buffer
noticeably more often than 1/N of the time among all buffers.
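The proposed sizing rule works out to a one-liner. A minimal sketch, where unpin_queue_size() is an invented name, not a PostgreSQL function:

```c
#include <assert.h>

/* Cap the per-backend delay queue at the lesser of 1024 entries and a
 * quarter of shared buffers divided among all connections, per the
 * formula in the text. */
static int
unpin_queue_size(int NBuffers, int max_connections)
{
    int share = (NBuffers / 4) / max_connections;

    return share < 1024 ? share : 1024;
}
```

With the default shared_buffers of 128MB (16384 8kB buffers) and 100 connections this gives a 40-entry queue; the 1024 cap only kicks in for large shared_buffers or few connections.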

Or, maybe you only put contended buffers in the delay queue, by
watching the delay count returned from s_lock().  That forces a lookup
on each pin, though.
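That refinement might look like the following. Everything here is invented for illustration (the threshold, the array, and both function names); in real code the delay count would come from the spinlock acquisition path.

```c
#include <assert.h>

/* Track only buffers whose header spinlock showed contention, i.e. a
 * nonzero delay count, and defer unpins only for those. */
#define SPIN_DELAY_THRESHOLD 1
#define MAX_CONTENDED 8

static int contended_bufs[MAX_CONTENDED];
static int n_contended = 0;

/* The per-pin lookup whose cost the text notes. */
static int
is_contended(int buf)
{
    for (int i = 0; i < n_contended; i++)
        if (contended_bufs[i] == buf)
            return 1;
    return 0;
}

/* Called with the delay count that s_lock() would report for buf. */
static void
note_spin_delays(int buf, int delays)
{
    if (delays >= SPIN_DELAY_THRESHOLD &&
        !is_contended(buf) && n_contended < MAX_CONTENDED)
        contended_bufs[n_contended++] = buf;
}
```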

merlin

