On Wed, Dec 31, 2014 at 7:50 PM, Thom Brown <
thom@linux.com> wrote:
>
>
> When attempting to recreate the plan in your example, I get an error:
>
> ➤ psql://thom@[local]:5488/pgbench
>
> # create table t1(c1 int, c2 char(500)) with (fillfactor=10);
> CREATE TABLE
> Time: 13.653 ms
>
> ➤ psql://thom@[local]:5488/pgbench
>
> # insert into t1 values(generate_series(1,100),'amit');
> INSERT 0 100
> Time: 4.796 ms
>
> ➤ psql://thom@[local]:5488/pgbench
>
> # explain select c1 from t1;
> ERROR: could not register background process
> HINT: You may need to increase max_worker_processes.
> Time: 1.659 ms
>
> ➤ psql://thom@[local]:5488/pgbench
>
> # show max_worker_processes ;
> max_worker_processes
> ----------------------
> 8
> (1 row)
>
> Time: 0.199 ms
>
> # show parallel_seqscan_degree ;
> parallel_seqscan_degree
> -------------------------
> 10
> (1 row)
>
>
> Should I really need to increase max_worker_processes to >= parallel_seqscan_degree?
Yes. The parallel workers are implemented on top of dynamic
bgworkers, so the number available is limited by max_worker_processes.
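For example, one way around the error above (assuming the GUC names
from the report; max_worker_processes cannot be changed without a
restart) would be:

```
# postgresql.conf -- max_worker_processes requires a server restart
max_worker_processes = 16        # must be >= parallel_seqscan_degree
parallel_seqscan_degree = 10     # patch-specific GUC, as shown above
```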
> If so, shouldn't there be a hint here along with the error message pointing this out? And should the error be produced when only a *plan* is being requested?
>
I think one thing we could do to minimize the chance of such an
error is to clamp the number of parallel workers used for the plan
to max_worker_processes when parallel_seqscan_degree is greater
than max_worker_processes. Even if we do this, such an error can
still occur if the user has registered a bgworker before we could
start parallel plan execution.
> Also, I noticed that where a table is partitioned, the plan isn't parallelised:
>
>
> Is this expected?
>
Yes, to keep the initial implementation simple, it allows a
parallel plan only when there is a single table in the query.
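So something like the following (t2 is a hypothetical second table;
the exact plan output depends on the patch) would take the regular,
non-parallel path:

```
-- Single-table scan: eligible for the parallel seqscan path
EXPLAIN SELECT c1 FROM t1;

-- More than one table: falls back to a regular plan
-- in this initial implementation
EXPLAIN SELECT t1.c1 FROM t1 JOIN t2 ON t1.c1 = t2.c1;
```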