Thread: question about parallel. i read document and i can't solve it. provide the solution please.
question about parallel. i read document and i can't solve it. provide the solution please.
From
PG Doc comments form
Date:
The following documentation comment has been logged on the website:

Page: https://www.postgresql.org/docs/18/when-can-parallel-query-be-used.html

Description:
I'm inquiring because I have a problem getting parallel processing to run effectively in PostgreSQL. My current parameter settings are:

max_parallel_apply_workers_per_subscription = 2
max_parallel_maintenance_workers = 8
max_parallel_workers = 8
max_parallel_workers_per_gather = 4
max_worker_processes = 8

We are using a 4-core CPU. When I set the table's parallel_workers storage parameter to 8 or use a hint like /* + Parallel (a 8) */, the EXPLAIN ANALYZE output shows a Gather node immediately followed by "Workers Planned: 8" and "Workers Launched: 2". Performance does not seem to improve when I use parallelism, because Workers Launched is only 2. How can I increase the Workers Launched value to match the Workers Planned value, or at least raise it to 4? I want to use parallelism to speed up index creation and data insertion.
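For reference, a minimal sketch of the setup described above, assuming a hypothetical table named big_table (the hint in the report requires the pg_hint_plan extension; the parallel_workers storage parameter and the EXPLAIN fields are standard PostgreSQL). Workers Launched can fall below Workers Planned when the cluster-wide pool of worker slots, bounded by max_parallel_workers and max_worker_processes, is partly occupied at execution time, so the sketch also checks that pool.

-- Reproduce the reported setup on a hypothetical table big_table.
SET max_parallel_workers_per_gather = 4;
ALTER TABLE big_table SET (parallel_workers = 8);

-- Compare "Workers Planned" with "Workers Launched" in the output.
EXPLAIN (ANALYZE) SELECT count(*) FROM big_table;

-- Check how much of the cluster-wide worker pool is already in use
-- (backend_type is available in pg_stat_activity since PostgreSQL 10).
SELECT count(*) AS active_parallel_workers
FROM pg_stat_activity
WHERE backend_type = 'parallel worker';

SHOW max_parallel_workers;
SHOW max_worker_processes;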
Re: question about parallel. i read document and i can't solve it. provide the solution please.
From
"David G. Johnston"
Date:
On Sunday, November 9, 2025, PG Doc comments form <noreply@postgresql.org> wrote:
The following documentation comment has been logged on the website:
Page: https://www.postgresql.org/docs/18/when-can-parallel-query-be-used.html
Description:
I'm inquiring because I have a problem getting parallel processing to run
effectively in PostgreSQL.
This is not the right place to seek support. We have a -general mailing list for that.
How can I increase the Workers Launched value to match the Workers Planned
value, or at least raise it to 4?
That level of control is not provided.
I want to use parallelism to speed up index creation and data insertion.
This page clearly states that data writing is not parallelized.
David J.
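On the index-creation part of the question: B-tree index builds can use parallel workers, but they are governed by max_parallel_maintenance_workers (and the table's parallel_workers storage parameter), not by the query-parallelism settings above. A minimal sketch, assuming a hypothetical table big_table with a column col:

-- Parallel B-tree index build, assuming the hypothetical table big_table.
SET max_parallel_maintenance_workers = 4;
SET maintenance_work_mem = '1GB';  -- parallel index builds benefit from more sort memory
CREATE INDEX big_table_col_idx ON big_table (col);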