Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions

From: Dimitrios Apostolou
Subject: Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions
Msg-id: 660a8477-4130-40da-3492-f8827c5c3596@gmx.net
In response to: Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: SELECT DISTINCT chooses parallel seqscan instead of indexscan on huge table with 1000 partitions
List: pgsql-general

On Fri, 10 May 2024, Tom Lane wrote:

> Dimitrios Apostolou <jimis@gmx.net> writes:
>> Further digging into this simple query, if I force the non-parallel plan
>> by setting max_parallel_workers_per_gather TO 0, I see that the query
>> planner comes up with a cost much higher:
>
>>   Limit  (cost=363.84..1134528847.47 rows=10 width=4)
>>     ->  Unique  (cost=363.84..22690570036.41 rows=200 width=4)
>>           ->  Append  (cost=363.84..22527480551.58 rows=65235793929 width=4)
>> ...
>
>> The total cost on the 1st line (cost=363.84..1134528847.47) has a much
>> higher upper limit than the total cost when
>> max_parallel_workers_per_gather is 4 (cost=853891608.79..853891608.99).
>> This explains the planner's choice. But I wonder why the cost estimation
>> is so far away from reality.
>
> I'd say the blame lies with that (probably-default) estimate of
> just 200 distinct rows.  That means the planner expects to have
> to read about 5% (10/200) of the tables to get the result, and
> that's making fast-start plans look bad.
>
> Possibly an explicit ANALYZE on the partitioned table would help.

It took a long time, but it finished:

ANALYZE
Time: 19177398.025 ms (05:19:37.398)
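
(For context, the command was just a plain ANALYZE of the partitioned
parent, along the lines of:

     ANALYZE test_runs_raw;   -- parent table name as in the query below

which on a partitioned table samples the partitions and builds the
"inheritance" statistics for the parent that the planner was missing.)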

And it made a difference indeed; the serial plan is chosen now:

EXPLAIN SELECT DISTINCT workitem_n FROM test_runs_raw ORDER BY workitem_n DESC LIMIT 10;
  Limit  (cost=364.29..1835512.29 rows=10 width=4)
    ->  Unique  (cost=364.29..22701882164.56 rows=123706 width=4)
          ->  Append  (cost=364.29..22538472401.60 rows=65363905182 width=4)
                ->  Index Only Scan Backward using test_runs_raw__part_max20000k_pkey on test_runs_raw__part_max20000k test_runs_raw_1000  (cost=0.12..2.34 rows=1 width=4)
                ->  Index Only Scan Backward using test_runs_raw__part_max19980k_pkey on test_runs_raw__part_max19980k test_runs_raw_999  (cost=0.12..2.34 rows=1 width=4)
                ->  Index Only Scan Backward using test_runs_raw__part_max19960k_pkey on test_runs_raw__part_max19960k test_runs_raw_998  (cost=0.12..2.34 rows=1 width=4)
[...]
                ->  Index Only Scan Backward using test_runs_raw__part_max12460k_pkey on test_runs_raw__part_max12460k test_runs_raw_623  (cost=0.57..12329614.53 rows=121368496 width=4)
                ->  Index Only Scan Backward using test_runs_raw__part_max12440k_pkey on test_runs_raw__part_max12440k test_runs_raw_622  (cost=0.57..5180832.16 rows=184927264 width=4)
                ->  Index Only Scan Backward using test_runs_raw__part_max12420k_pkey on test_runs_raw__part_max12420k test_runs_raw_621  (cost=0.57..4544964.21 rows=82018824 width=4)
[...]
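
For the record, my back-of-the-envelope math on why the serial plan now
wins: the LIMIT node charges its input's cost proportionally to the
fraction of rows it expects to pull, so with the distinct estimate at
123706 instead of the default 200 we get roughly

     364.29 + 10/123706 * (22701882164.56 - 364.29) ≈ 1.8 million

which is far below the ~853 million startup cost of the parallel plan
quoted above, so the fast-start plan is finally preferred.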

Overall I think there are two issues that postgres could handle better
here:

1. Avoid the need for a manual ANALYZE on the partitioned table (see the
    sketch right after this list).

2. Create a different parallel plan, one that can exit early when the
    LIMIT is proportionally low. I feel the partitions could be
    parallel-scanned in order, so that the whole thing stops as soon as
    one partition has been read.
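
Regarding 1, a quick way to see the problem (just a sketch, using the
same table as above) is that autovacuum never touches the partitioned
parent, only the leaf partitions:

     SELECT relname, last_analyze, last_autoanalyze
       FROM pg_stat_user_tables
      WHERE relname = 'test_runs_raw';   -- the partitioned parent

If both columns are NULL, the parent has most likely never been
analyzed, and the planner falls back to defaults like the ndistinct=200
seen earlier.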

Thank you!
Dimitris



