Re: two index bitmap scan of a big table & hash_seq_search

From: Tom Lane
Subject: Re: two index bitmap scan of a big table & hash_seq_search
Date:
Msg-id: 1359.1313859837@sss.pgh.pa.us
In reply to: Re: two index bitmap scan of a big table & hash_seq_search ("Sergey E. Koposov" <math@sai.msu.ru>)
List: pgsql-hackers
"Sergey E. Koposov" <math@sai.msu.ru> writes:
> Yes, it turns out that the problem was in lossify'ing the bitmap too
> intensely.

Yeah, I had just been coming to the same conclusion.  Your table has
about 134M pages, and if the planner estimate of 62M rows was right
(and there's no reason it shouldn't be pretty close on that) then
we're talking about a bitmap that's going to contain about one bit
set in each of about half of the pages.  The page structures are
50-some bytes apiece so a non-lossy representation would run to
3-plus GB, well beyond your work_mem limit.  So it would fill up
to work_mem and then start lossifying pages ... one at a time.
I had suspected that there might be a performance issue there,
as per the comment at line 954, but we hadn't actually seen it
reported from the field before.
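Spelling that arithmetic out (back-of-the-envelope only; the 50-byte
figure is an approximation of the per-page entry cost, roughly one
PagetableEntry plus hashtable overhead, not an exact constant):

    #include <stdio.h>

    int main(void)
    {
        /* Rough size of an exact (non-lossy) bitmap for this query.
         * Treat the result as order-of-magnitude only. */
        double entries = 62e6;          /* ~one entry per touched heap page */
        double bytes_per_entry = 50.0;  /* "50-some bytes apiece" */

        printf("exact bitmap: ~%.1f GB\n",
               entries * bytes_per_entry / 1e9);   /* prints ~3.1 GB */
        return 0;
    }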

> After that I changed the check in tbm_lossify()
> from:
>                  if (tbm->nentries <= tbm->maxentries)
> to:
>                  if (tbm->nentries <= (0.8*tbm->maxentries))
> which allowed the query to finish in 75 seconds (compared to 3 hours).

I was about to propose using tbm->maxentries/2, which is in the same
spirit but a tad cheaper to calculate.
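For reference, the loop in question is shaped roughly like this (a
simplified sketch, with names following tidbitmap.c but the body
abbreviated; status, page, and tbm are the usual locals):

    hash_seq_init(&status, tbm->pagetable);
    while ((page = (PagetableEntry *) hash_seq_search(&status)) != NULL)
    {
        if (page->ischunk)
            continue;               /* already lossy, nothing to free here */

        tbm_mark_page_lossy(tbm, page->blockno);    /* may free an entry */

        /* Exiting at <= maxentries leaves no headroom: the very next
         * insertion pushes us over the limit again and forces another
         * full-hashtable scan.  Halving leaves real headroom and is
         * cheap to compute. */
        if (tbm->nentries <= tbm->maxentries / 2)
        {
            hash_seq_term(&status);
            return;
        }
    }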

I think that we also need to consider the possibility that tbm_lossify
finishes its loop without ever getting under maxentries --- that could
only happen with very large tables and very small work_mem, but it could
happen.  If it did, then all subsequent operations would keep on calling
tbm_lossify, and it would keep scanning the entire hashtable and
probably not accomplishing much, and taking forever to do it.  Unless
somebody has a better idea, what I think we should do then is just
artificially inflate maxentries --- that is, accept that we are not
going to fit in the originally requested work_mem, and we might as well
set a more realistic goal.
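In code, that fallback might look something like this at the end of
tbm_lossify() (a sketch of the idea only, not a tested patch):

    /* If the scan above ended without getting under the limit, stop
     * pretending we can meet the original work_mem budget.  Doubling
     * the current entry count gives later insertions some headroom and
     * keeps tbm_lossify() from being re-run on every add. */
    if (tbm->nentries > tbm->maxentries)
        tbm->maxentries = tbm->nentries * 2;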

> Do you think that this should be fixed ?

Yes, definitely.
        regards, tom lane

