Re: Parallel Query
| From | Tomas Vondra |
|---|---|
| Subject | Re: Parallel Query |
| Date | |
| Msg-id | 20191113204737.l6uny3qdwl6a4u7m@development |
| In reply to | Parallel Query (Luís Roberto Weck <luisroberto@siscobra.com.br>) |
| Responses | Re: Parallel Query |
| List | pgsql-performance |
On Wed, Nov 13, 2019 at 05:16:44PM -0300, Luís Roberto Weck wrote:
>Hi!
>
>Is there a reason query 3 can't use parallel workers? Using q1 and q2
>they seem very similar but can use up to 4 workers to run faster:
>
>q1: https://pastebin.com/ufkbSmfB
>q2: https://pastebin.com/Yt32zRNX
>q3: https://pastebin.com/dqh7yKPb
>
>The sort node on q3 takes almost 12 seconds, making the query run in
>68, even if I had set enough work_mem to make it all in memory.
>

Most likely because it'd actually be slower. The trouble is that the
aggregation does not actually reduce the cardinality, or at least the
planner does not expect that - the Sort and GroupAggregate are expected
to produce 3454539 rows. The last step of the aggregation has to receive
and merge data from all workers, which is not exactly free, and if there
is no reduction of cardinality it's likely cheaper to just do everything
in a single process serially.

What does the EXPLAIN ANALYZE output look like without the HAVING clause?

Try setting parallel_setup_cost and parallel_tuple_cost to 0. That might
trigger parallel query.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
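
A minimal sketch of the experiment suggested above. The table name t, the
column grp_col, and the HAVING condition are placeholders standing in for the
actual q3 query; the GUC changes apply to the current session only.

```sql
-- Per-session only: make parallel plans "free" for the planner by zeroing
-- the cost of launching workers and of transferring tuples to the leader.
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;

-- Re-run the query and compare plan shape and timing with the serial run.
EXPLAIN (ANALYZE, BUFFERS)
SELECT grp_col, count(*)
FROM t
GROUP BY grp_col
HAVING count(*) > 1;

-- Also run it without the HAVING clause, as suggested, to see whether the
-- aggregate itself (rather than the filter) is what keeps the plan serial.
EXPLAIN (ANALYZE, BUFFERS)
SELECT grp_col, count(*)
FROM t
GROUP BY grp_col;

-- Restore the defaults afterwards.
RESET parallel_setup_cost;
RESET parallel_tuple_cost;
```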