Re: Speed up Clog Access by increasing CLOG buffers

From: Amit Kapila
Subject: Re: Speed up Clog Access by increasing CLOG buffers
Date:
Msg-id: CAA4eK1+SoW3FBrdZV+3m34uCByK3DMPy_9QQs34yvN8spByzyA@mail.gmail.com
In response to: Re: Speed up Clog Access by increasing CLOG buffers  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Speed up Clog Access by increasing CLOG buffers  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Wed, Dec 9, 2015 at 1:02 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Thu, Dec 3, 2015 at 1:48 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > I think the way to address this is to not add a backend to the group list
> > if it does not intend to update the same page as the group leader.  For
> > transactions to be on different pages, their transaction ids have to be at
> > least 32768 apart, and I don't see much possibility of that happening for
> > concurrent transactions that are going to be grouped.
>
> That might work.
>

Okay, attached patch group_update_clog_v3.patch implements the above.
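
To spell out the page math behind the 32768 figure: with the default 8kB
block size, each CLOG page stores 8192 * 4 = 32768 two-bit transaction
statuses, so only xids at least that far apart map to different pages.
Just as a minimal sketch of the same-page test (illustrative only,
mirroring CLOG_XACTS_PER_PAGE and TransactionIdToPage in clog.c; the
names below are not what the patch uses):

#include "postgres.h"       /* TransactionId */

/*
 * Illustrative constant: with the default 8kB BLCKSZ, a CLOG page holds
 * 8192 * 4 = 32768 two-bit transaction statuses (CLOG_XACTS_PER_PAGE
 * in src/backend/access/transam/clog.c).
 */
#define SKETCH_CLOG_XACTS_PER_PAGE  32768

/*
 * A backend joins the pending-updates group only if its xid maps to the
 * same CLOG page as the group leader's xid; otherwise it updates the
 * status on its own, as before.
 */
static inline bool
XidsOnSameClogPage(TransactionId xid, TransactionId leader_xid)
{
    return (xid / (TransactionId) SKETCH_CLOG_XACTS_PER_PAGE) ==
           (leader_xid / (TransactionId) SKETCH_CLOG_XACTS_PER_PAGE);
}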

> >> My idea for how this could possibly work is that you could have a list
> >> of waiting backends for each SLRU buffer page.
> >
> > Won't this mean that first we need to ensure that the page exists in one
> > of the buffers, and once we have the page in an SLRU buffer, we can form
> > the list and ensure that the list is processed before eviction?
> > If my understanding is right, then for this to work we would probably
> > need to acquire CLogControlLock in Shared mode in addition to acquiring
> > it in Exclusive mode for updating the status on the page and performing
> > pending updates for other backends.
>
> Hmm, that wouldn't be good.  You're right: this is a problem with my
> idea.  We can try what you suggested above and see how that works.  We
> could also have two or more slots for groups - if a backend doesn't
> get the lock, it joins the existing group for the same page, or else
> creates a new group if any slot is unused.
>

I have implemented this idea as well in the attached patch
group_slots_update_clog_v3.patch.
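
To make sure we mean the same thing by "slots": the idea is to keep a small,
fixed number of group slots, each tagged with the CLOG page it is collecting
updates for; a backend that fails to get CLogControlLock joins a slot that
already targets its page, claims an unused slot, or falls back to doing the
update itself. A rough sketch of just the slot-selection step (names and the
slot count are illustrative, and the atomics needed to join a slot safely
are omitted here):

#define NUM_CLOG_GROUP_SLOTS    2       /* illustrative; could be more */
#define INVALID_CLOG_PAGE       (-1)    /* marks a slot as unused */

typedef struct ClogGroupSlot
{
    int     group_page;     /* CLOG page this slot collects updates for */
    /* ... list of waiting backends, leader bookkeeping, etc. ... */
} ClogGroupSlot;

/*
 * Return the slot this backend should join: an existing group for its
 * page if there is one, else the first unused slot, else -1 (in which
 * case the backend simply waits for CLogControlLock and updates the
 * page itself).
 */
static int
ChooseClogGroupSlot(ClogGroupSlot *slots, int target_page)
{
    int     i;
    int     free_slot = -1;

    for (i = 0; i < NUM_CLOG_GROUP_SLOTS; i++)
    {
        if (slots[i].group_page == target_page)
            return i;       /* join the existing group for this page */
        if (slots[i].group_page == INVALID_CLOG_PAGE && free_slot < 0)
            free_slot = i;  /* remember the first unused slot */
    }
    return free_slot;
}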

>  I think it might be
> advantageous to have at least two groups because otherwise things
> might slow down when some transactions are rolling over to a new page
> while others are still in flight for the previous page.  Perhaps we
> should try it both ways and benchmark.
>

Sure, I can do the benchmarks with both patches, but before that it would
be helpful if you could check whether group_slots_update_clog_v3.patch is
in line with what you have in mind.

Note - I have used the attached patch transaction_burner_v1.patch (extracted
from Jeff's patch upthread) to verify the case where transactions fall onto
different CLOG pages.
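
The burner is just a convenience for getting concurrent sessions onto
different CLOG pages quickly; anything that forces one new xid per
transaction works, since a page rolls over every 32768 xids. Purely as an
illustration (this is not what transaction_burner_v1.patch does), a crude
client-side equivalent using libpq and txid_current() would be:

/* burn_xids.c - crude illustration only: consume xids by forcing xid
 * assignment (txid_current()) in a series of autocommit transactions,
 * so that later transactions land on the next CLOG page.
 *
 * Build: cc burn_xids.c -o burn_xids -I$(pg_config --includedir) \
 *           -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"

int
main(int argc, char **argv)
{
    long        nxids = (argc > 1) ? atol(argv[1]) : 32768;
    long        i;
    PGconn     *conn = PQconnectdb("");     /* use PG* environment settings */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (i = 0; i < nxids; i++)
    {
        PGresult   *res = PQexec(conn, "SELECT txid_current()");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}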
