Re: [HACKERS] Poor memory context performance in large hash joins

From: Peter Geoghegan
Subject: Re: [HACKERS] Poor memory context performance in large hash joins
Date:
Msg-id: CAH2-Wz=uF=Qe0n0atK17vbdQ0LMkF6QQU9she9aaZA_BLs+_mw@mail.gmail.com
In response to: [HACKERS] Poor memory context performance in large hash joins  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
On Thu, Feb 23, 2017 at 2:13 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> Is there a good solution to this?  Could the new chunks be put in a
> different memory context, and then destroy the old context and install the
> new one at the end of ExecHashIncreaseNumBatches? I couldn't find a destroy
> method for memory contexts, it looks like you just reset the parent instead.
> But I don't think that would work here.

Are you aware of the fact that tuplesort.c got a second memory context
for 9.6, entirely on performance grounds?
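Roughly, the pattern is to keep the rebuilt tuples in their own context so the old storage can be dropped with a single MemoryContextDelete() call instead of pfree'ing chunk by chunk. A minimal sketch of that idea follows; the function name and its arguments are illustrative only (this is not the actual nodeHash.c or tuplesort.c code), though AllocSetContextCreate(), MemoryContextSwitchTo() and MemoryContextDelete() are the real memory-context APIs:

/*
 * Sketch only: function name and call site are hypothetical; the
 * MemoryContext calls are the standard palloc-infrastructure routines.
 */
#include "postgres.h"
#include "utils/memutils.h"

static MemoryContext
rebuild_batch_in_fresh_context(MemoryContext parent, MemoryContext oldBatchCxt)
{
    /* New sibling context to receive the surviving tuples. */
    MemoryContext newBatchCxt = AllocSetContextCreate(parent,
                                                      "HashBatchContext",
                                                      ALLOCSET_DEFAULT_SIZES);
    MemoryContext save = MemoryContextSwitchTo(newBatchCxt);

    /* ... palloc() copies of the tuples being kept go here ... */

    MemoryContextSwitchTo(save);

    /*
     * Drop the old context in one call -- MemoryContextDelete() is the
     * "destroy" operation: it releases every chunk in the context at once.
     */
    MemoryContextDelete(oldBatchCxt);

    return newBatchCxt;
}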


-- 
Peter Geoghegan



In pgsql-hackers by date:

Previous
From: Jeff Janes
Date:
Message: [HACKERS] Poor memory context performance in large hash joins
Next
From: Tom Lane
Date:
Message: Re: [HACKERS] Poor memory context performance in large hash joins