Thread: How to query with more workers on a large table with many partitions

How to query with more workers on a large table with many partitions

From: Gabriel Sánchez
Date:
Hi PostgreSQL community,

I have a large table (86 GB) that is declaratively partitioned by year, sub-partitioned by month, and sub-sub-partitioned by day. It currently has 437 leaf partitions and will keep growing by one partition per day. When I run a simple count(*) filtered by date, the planner uses only five workers, even though I have the following in postgresql.conf:

max_worker_processes = 12
max_parallel_workers_per_gather = 12
max_parallel_workers = 12
shared_buffers = 32GB
temp_buffers = 1GB
work_mem = 512MB

The top-level partitioned table has been ANALYZEd.

I'm running PostgreSQL 16 on an AWS EC2 instance with 16 logical processors and 128 GB of RAM. How can I get PG to run the query with more workers?

Thank you,
Gabriel


Re: How to query with more workers on a large table with many partitions

From: Greg Hennessy
Date:

> I'm running PostgreSQL 16 on an AWS EC2 instance with 16 logical processors
> and 128 GB of RAM. How can I get PG to run the query with more workers?

Postgres sizes the worker pool from the relation size: it considers one worker once the table is at least min_parallel_table_scan_size (8MB by default), and adds another each time the size triples, i.e. roughly 1 + log3(table_size / min_parallel_table_scan_size), capped by max_parallel_workers_per_gather. With day-sized leaf partitions, each individual partition may be small enough that only a few workers are justified per scan.
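As a rough sketch of that heuristic (the real logic lives in compute_parallel_worker in the planner and also considers index size and per-partition sizes under Parallel Append, so treat this as an approximation, not the exact algorithm):

```python
import math

def planned_workers(table_size_mb,
                    min_parallel_table_scan_size_mb=8,   # PG default: 8MB
                    max_parallel_workers_per_gather=12):
    """Approximate the planner's size-based worker count for a seq scan.

    One worker once the relation reaches min_parallel_table_scan_size,
    plus one more each time the size triples, capped by
    max_parallel_workers_per_gather.
    """
    if table_size_mb < min_parallel_table_scan_size_mb:
        return 0  # too small to bother parallelizing
    workers = 1 + int(math.log(table_size_mb / min_parallel_table_scan_size_mb, 3))
    return min(workers, max_parallel_workers_per_gather)

# An 86 GB table (~88064 MB) scanned as a single relation:
print(planned_workers(88064))
# A single ~200 MB daily leaf partition justifies far fewer workers:
print(planned_workers(200))
```

This illustrates why per-partition scans of a heavily partitioned table can plan fewer workers than the table's total size would suggest.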

You may want to try ALTER TABLE ... SET (parallel_workers = 10) (or whatever your desired value is); the parallel_workers storage parameter overrides the size-based heuristic.