Re: [HACKERS] Poor memory context performance in large hash joins
| From | Tom Lane |
|---|---|
| Subject | Re: [HACKERS] Poor memory context performance in large hash joins |
| Date | |
| Msg-id | 10401.1487888906@sss.pgh.pa.us |
| In reply to | [HACKERS] Poor memory context performance in large hash joins (Jeff Janes <jeff.janes@gmail.com>) |
| Responses | Re: [HACKERS] Poor memory context performance in large hash joins; Re: [HACKERS] Poor memory context performance in large hash joins |
| List | pgsql-hackers |
Jeff Janes <jeff.janes@gmail.com> writes:
> The number of new chunks can be almost as large as the number of old
> chunks, especially if there is a very popular value. The problem is that
> every time an old chunk is freed, the code in aset.c around line 968 has to
> walk over all the newly allocated chunks in the linked list before it can
> find the old one being freed. This is an N^2 operation, and I think it has
> horrible CPU cache hit rates as well.
Maybe it's time to convert that to a doubly-linked list. Although if the
hash code is producing a whole lot of requests that are only a bit bigger
than the separate-block threshold, I'd say It's Doing It Wrong. It should
learn to aggregate them into larger requests.
regards, tom lane
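[Editor's note: the exchange above can be illustrated with a minimal sketch. The structures below are hypothetical simplifications, not the actual aset.c block headers: with only a `next` pointer, freeing a block requires a linear walk from the list head to find its predecessor (the O(N^2) behavior Jeff describes when N blocks are freed), whereas the `prev` pointer Tom suggests makes each unlink O(1).]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplified block header; the real aset.c layout differs. */
typedef struct Block
{
    struct Block *prev;     /* back-pointer that enables O(1) unlink */
    struct Block *next;
} Block;

/*
 * Singly-linked removal: must scan from the head to locate the pointer
 * that references the target.  O(n) per free, O(n^2) when freeing many
 * blocks, with poor cache behavior on long lists.
 */
static void
unlink_singly(Block **head, Block *target)
{
    Block **link = head;

    while (*link != target)     /* linear walk to find the predecessor */
        link = &(*link)->next;
    *link = target->next;
}

/*
 * Doubly-linked removal: the predecessor is already known via prev,
 * so the unlink is constant time regardless of list length.
 */
static void
unlink_doubly(Block **head, Block *target)
{
    if (target->prev)
        target->prev->next = target->next;
    else
        *head = target->next;
    if (target->next)
        target->next->prev = target->prev;
}
```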