Discussion: Re: [COMMITTERS] pgsql: Add max_parallel_workers GUC.
Robert Haas <rhaas@postgresql.org> writes:
> Add max_parallel_workers GUC.
> Increase the default value of the existing max_worker_processes GUC
> from 8 to 16, and add a new max_parallel_workers GUC with a maximum
> of 8.

This broke buildfarm members coypu and sidewinder.  It appears the reason
is that those machines can only get up to 30 server processes, cf this
pre-failure initdb trace:

http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=coypu&dt=2016-12-02%2006%3A30%3A49&stg=initdb-C

creating directory data-C ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... sysv
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

So you've reduced their available number of regular backends to less than
20, which is why their tests are now dotted with

! psql: FATAL: sorry, too many clients already

There may well be other machines with similar issues; we won't know until
today's other breakage clears.

We could ask the owners of these machines to reduce the test parallelism
via the MAX_CONNECTIONS makefile variable, but I wonder whether this
increase was well thought out in the first place.

			regards, tom lane
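The squeeze Tom describes can be sketched roughly as arithmetic. This is an illustrative approximation built from the numbers in the trace above (a ~30-process OS ceiling, `make check` opening about 20 connections), not PostgreSQL's exact process-table sizing formula:

```python
# Hedged sketch: why raising max_worker_processes from 8 to 16 broke
# a box whose OS limits it to roughly 30 server processes.  Auxiliary
# processes (autovacuum, checkpointer, etc.) are ignored for simplicity.

def regular_backend_slots(process_ceiling, max_worker_processes):
    """Slots left for ordinary client backends once background-worker
    slots are set aside (illustrative, not the server's real formula)."""
    return process_ceiling - max_worker_processes

before = regular_backend_slots(30, 8)    # old default
after = regular_backend_slots(30, 16)    # new default

# The parallel regression schedule can open ~20 connections at once;
# the old default left room for that, the new one does not, hence
# "FATAL: sorry, too many clients already".
print(before, after)
```

On these numbers, `before` is 22 and `after` is 14, which matches Tom's "less than 20 regular backends" observation.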
On Dec 2, 2016, at 4:07 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <rhaas@postgresql.org> writes:
>> Add max_parallel_workers GUC.
>> Increase the default value of the existing max_worker_processes GUC
>> from 8 to 16, and add a new max_parallel_workers GUC with a maximum
>> of 8.
>
> This broke buildfarm members coypu and sidewinder.  It appears the reason
> is that those machines can only get up to 30 server processes, cf this
> pre-failure initdb trace:
>
> http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=coypu&dt=2016-12-02%2006%3A30%3A49&stg=initdb-C
>
> creating directory data-C ... ok
> creating subdirectories ... ok
> selecting default max_connections ... 30
> selecting default shared_buffers ... 128MB
> selecting dynamic shared memory implementation ... sysv
> creating configuration files ... ok
> running bootstrap script ... ok
> performing post-bootstrap initialization ... ok
> syncing data to disk ... ok
>
> So you've reduced their available number of regular backends to less than
> 20, which is why their tests are now dotted with
>
> ! psql: FATAL: sorry, too many clients already
>
> There may well be other machines with similar issues; we won't know until
> today's other breakage clears.
>
> We could ask the owners of these machines to reduce the test parallelism
> via the MAX_CONNECTIONS makefile variable, but I wonder whether this
> increase was well thought out in the first place.

Signs point to "no".  It seemed like a good idea to leave some daylight
between max_parallel_workers and max_worker_processes, but evidently this
wasn't the way to get there.  Or else we should just give up on that
thought.

...Robert
On 12/2/16 2:34 PM, Robert Haas wrote:
> Signs point to "no".  It seemed like a good idea to leave some daylight
> between max_parallel_workers and max_worker_processes, but evidently this
> wasn't the way to get there.  Or else we should just give up on that
> thought.

Could the defaults be scaled based on max_connections, with a max on the
default?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)
Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
> On 12/2/16 2:34 PM, Robert Haas wrote:
>> Signs point to "no".  It seemed like a good idea to leave some daylight
>> between max_parallel_workers and max_worker_processes, but evidently this
>> wasn't the way to get there.  Or else we should just give up on that
>> thought.

> Could the defaults be scaled based on max_connections, with a max on the
> default?

Might work.  We've had very bad luck with GUC variables with
interdependent defaults, but maybe the user-visible knob could be a
percentage of max_connections or something like that.

			regards, tom lane
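A scaled default of the kind Jim and Tom are discussing might look something like this. The specific rule (a quarter of max_connections, capped at 8) is purely a hypothetical example for illustration; nothing like it was actually proposed or committed:

```python
# Hypothetical sketch of a max_parallel_workers default derived from
# max_connections, with a floor and a cap.  The 1/4 ratio and the cap
# of 8 are invented for illustration, not values from this thread.

def default_max_parallel_workers(max_connections, cap=8):
    """Scale the default with max_connections, but never below 1
    and never above a fixed cap."""
    return min(cap, max(1, max_connections // 4))

# A constrained buildfarm box (max_connections = 30) would get a
# smaller default, while a stock install (max_connections = 100)
# would hit the cap.
small = default_max_parallel_workers(30)
stock = default_max_parallel_workers(100)
```

This kind of interdependence is exactly what Tom warns has gone badly before: changing max_connections would silently change parallel behavior, which is why he suggests that if anything, the user-visible knob should be the percentage itself.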
On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jim Nasby <Jim.Nasby@BlueTreble.com> writes:
>> On 12/2/16 2:34 PM, Robert Haas wrote:
>>> Signs point to "no".  It seemed like a good idea to leave some daylight
>>> between max_parallel_workers and max_worker_processes, but evidently this
>>> wasn't the way to get there.  Or else we should just give up on that
>>> thought.
>
>> Could the defaults be scaled based on max_connections, with a max on the
>> default?
>
> Might work.  We've had very bad luck with GUC variables with
> interdependent defaults, but maybe the user-visible knob could be a
> percentage of max_connections or something like that.

Seems like overkill.  Let's just reduce the values a bit.

...Robert
Robert Haas <robertmhaas@gmail.com> writes:
> On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Might work.  We've had very bad luck with GUC variables with
>> interdependent defaults, but maybe the user-visible knob could be a
>> percentage of max_connections or something like that.

> Seems like overkill.  Let's just reduce the values a bit.

Agreed.  How about max_worker_processes = 8 as before, with
max_parallel_workers of maybe 6?  Or just set them both to 8.
I'm not sure that the out-of-the-box configuration needs to
leave backend slots locked down for non-parallel worker processes.
Any such process would require manual configuration anyway, no?

			regards, tom lane
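In postgresql.conf terms, the two alternatives Tom floats here would amount to something like the following (a sketch of his suggestions as written, not a committed change):

```
# Option 1: restore the old ceiling, keep a little daylight for
# non-parallel background workers
max_worker_processes = 8
max_parallel_workers = 6

# Option 2: no reserved daylight at all
max_worker_processes = 8
max_parallel_workers = 8
```

Under option 2, all eight worker slots are usable by parallel query, on the theory that any extension-defined background worker needs manual configuration anyway.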
On Sat, Dec 3, 2016 at 11:43 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Dec 2, 2016, at 5:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>> Might work.  We've had very bad luck with GUC variables with
>>> interdependent defaults, but maybe the user-visible knob could be a
>>> percentage of max_connections or something like that.
>
>> Seems like overkill.  Let's just reduce the values a bit.
>
> Agreed.  How about max_worker_processes = 8 as before, with
> max_parallel_workers of maybe 6?  Or just set them both to 8.
> I'm not sure that the out-of-the-box configuration needs to
> leave backend slots locked down for non-parallel worker processes.
> Any such process would require manual configuration anyway, no?

Sure, you'd have to arrange to load the relevant module somehow.  It
would be nicer if we didn't have to require additional configuration
beyond that, but I'm not prepared to ask BF owners to reconfigure their
systems just for that marginal advantage, so I think we'll have to live
with this for now.  I pushed a commit backing out the increased default,
which I originally suggested.  Mea culpa.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company