On Wed, Feb 15, 2017 at 4:43 AM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> On 14 February 2017 at 22:35, Robert Haas <robertmhaas@gmail.com> wrote:
>>> For example, suppose that I have a scan of two children, one
>>> of which has parallel_workers of 4, and the other of which has
>>> parallel_workers of 3. If I pick parallel_workers of 7 for the
>>> Parallel Append, that's probably too high.
>
> In the patch, in such a case, 7 workers are indeed selected for Parallel
> Append path, so that both the subplans are able to execute in parallel
> with their full worker capacity. Are you suggesting that we should not?
Absolutely. I think that's going to be way too many workers. Imagine
that there are 100 child tables and each one is big enough to qualify
for 2 or 3 workers. No matter what value the user has selected for
max_parallel_workers_per_gather, they should not get a scan involving
200 workers.
What I was thinking about is something like this:
1. First, take the maximum parallel_workers value from among all the children.
2. Second, compute log2(num_children)+1 and round up. So, for 1
child, 1; for 2 children, 2; for 3-4 children, 3; for 5-8 children, 4;
for 9-16 children, 5, and so on.
3. Use as the number of parallel workers for the children the maximum
of the value computed in step 1 and the value computed in step 2.
With this approach, a plan with 100 children qualifies for 8 parallel
workers, since log2(100) + 1 is about 7.6, which rounds up to 8 (unless
one of the children individually qualifies for some larger number, or
unless max_parallel_workers_per_gather is set to a smaller value). That
seems fairly reasonable to me.
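
To spell the heuristic out, here is a rough, self-contained sketch in C.
The names (append_parallel_workers, child_workers, max_per_gather) are
illustrative only and not taken from any actual patch; the real code
would pull the per-child parallel_workers values from the child paths:

#include <math.h>
#include <stdio.h>

/*
 * Sketch of the proposed heuristic: take the larger of (a) the maximum
 * parallel_workers among the children and (b) ceil(log2(num_children) + 1),
 * then clamp to max_parallel_workers_per_gather.
 */
static int
append_parallel_workers(const int *child_workers, int num_children,
                        int max_per_gather)
{
    int     max_child = 0;
    int     log_bound;
    int     result;

    /* Step 1: the maximum parallel_workers value among all the children. */
    for (int i = 0; i < num_children; i++)
        if (child_workers[i] > max_child)
            max_child = child_workers[i];

    /* Step 2: log2(num_children) + 1, rounded up. */
    log_bound = (int) ceil(log2((double) num_children) + 1.0);

    /* Step 3: use the larger of the two values ... */
    result = (max_child > log_bound) ? max_child : log_bound;

    /* ... but never exceed max_parallel_workers_per_gather. */
    if (result > max_per_gather)
        result = max_per_gather;

    return result;
}

int
main(void)
{
    /* 100 children, each individually qualifying for 3 workers. */
    int     child_workers[100];

    for (int i = 0; i < 100; i++)
        child_workers[i] = 3;

    /* Prints 8: ceil(log2(100) + 1) = 8, which exceeds the per-child max. */
    printf("%d\n", append_parallel_workers(child_workers, 100, 16));

    return 0;
}

With 100 children each qualifying for 3 workers and
max_parallel_workers_per_gather = 16, this prints 8, matching the
example above.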
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company