Re: strange slow query - lost lot of time somewhere

From: David Rowley
Subject: Re: strange slow query - lost lot of time somewhere
Date:
Msg-id: CAApHDvrQoR4NknjOif+zC6N=CRw1rh83kptztBPgOK_i9tjHdw@mail.gmail.com
In reply to: Re: strange slow query - lost lot of time somewhere  (David Rowley <dgrowleyml@gmail.com>)
Responses: Re: strange slow query - lost lot of time somewhere  (David Rowley <dgrowleyml@gmail.com>)
Re: strange slow query - lost lot of time somewhere  (Justin Pryzby <pryzby@telsasoft.com>)
List: pgsql-hackers
On Fri, 6 May 2022 at 20:04, David Rowley <dgrowleyml@gmail.com> wrote:
> Thanks for the report.  I think I now see the problem.  Looking at
> [1], it seems that's a bushy plan. That's fine, but less common than a
> left deep plan.

On second thoughts, it does not need to be a bushy plan for the outer
side of the nested loop to be parameterized by some higher-level
nested loop. There's an example of a plan like this in the regression
tests.

regression=# explain (analyze, costs off, summary off)
regression-# select * from tenk1 t1 left join
regression-# (tenk1 t2 join tenk1 t3 on t2.thousand = t3.unique2)
regression-#  on t1.hundred = t2.hundred and t1.ten = t3.ten
regression-# where t1.unique1 = 1;
                                               QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Nested Loop Left Join (actual time=0.258..0.487 rows=20 loops=1)
   ->  Index Scan using tenk1_unique1 on tenk1 t1 (actual time=0.049..0.049 rows=1 loops=1)
         Index Cond: (unique1 = 1)
   ->  Nested Loop (actual time=0.204..0.419 rows=20 loops=1)
         Join Filter: (t1.ten = t3.ten)
         Rows Removed by Join Filter: 80
         ->  Bitmap Heap Scan on tenk1 t2 (actual time=0.064..0.194 rows=100 loops=1)
               Recheck Cond: (t1.hundred = hundred)
               Heap Blocks: exact=86
               ->  Bitmap Index Scan on tenk1_hundred (actual time=0.036..0.036 rows=100 loops=1)
                     Index Cond: (hundred = t1.hundred)
         ->  Memoize (actual time=0.001..0.001 rows=1 loops=100)
               Cache Key: t2.thousand
               Cache Mode: logical
               Hits: 90  Misses: 10  Evictions: 0  Overflows: 0  Memory Usage: 4kB
               ->  Index Scan using tenk1_unique2 on tenk1 t3 (actual time=0.009..0.009 rows=1 loops=10)
                     Index Cond: (unique2 = t2.thousand)
(17 rows)

Debugging this, I see that the Memoize plan won because we were passing
10000 as the number of calls. It should have been passing 100.  The
Memoize node's actual number of loops agrees with that. Fixing the
call sites to correctly pass 100 gets rid of the Memoize node.
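To make the effect of that wrong call count concrete, here is a rough
sketch (not PostgreSQL's actual costing code) of how the expected cache
hit ratio for a Memoize node depends on the estimated number of calls
versus the number of distinct cache key values. The function name and
the exact formula are illustrative assumptions; the point is only that
inflating the call count inflates the apparent benefit of caching:

```python
def expected_hit_ratio(est_calls, ndistinct):
    """Hypothetical sketch: fraction of lookups served from a logical
    cache that never evicts. The first lookup for each distinct key is
    a miss; every repeat lookup of that key is a hit."""
    misses = min(ndistinct, est_calls)  # one cold miss per distinct key
    hits = est_calls - misses
    return hits / est_calls

# The plan above really makes 100 calls over ~10 distinct t2.thousand
# values, matching the reported "Hits: 90  Misses: 10":
print(expected_hit_ratio(100, 10))    # 0.9

# With the buggy call count of 10000, the same 10 distinct keys make
# caching look even cheaper per call, so Memoize wins when it shouldn't:
print(expected_hit_ratio(10000, 10))
```

Under the correct call count the cache still helps here, but with fewer
estimated repeats per key the comparison against a plain parameterized
scan tightens, which is why fixing the count changes the chosen plan.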

I've attached a patch to fix.  I'll look at it in more detail after the weekend.

I'm very tempted to change the EXPLAIN output in at least master to
display the initial and final (maximum) hash table sizes. Wondering if
anyone would object to that?

David

Attachments

In the pgsql-hackers list, by send date:

Previous
From: Peter Smith
Date:
Message: Fix typo in code comment - origin.c
Next
From: Robert Haas
Date:
Message: Re: Configuration Parameter/GUC value validation hook