Re: Parallel Seq Scan
From: Robert Haas
Subject: Re: Parallel Seq Scan
Date:
Msg-id: CA+Tgmoa5FseoOdBSKeW6U_aJZ7k1RoNE8gRqk_xNURwPX-Oj3w@mail.gmail.com
In reply to: Re: Parallel Seq Scan (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Wed, Apr 1, 2015 at 6:30 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Mon, Mar 30, 2015 at 8:35 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> So, suppose we have a plan like this:
>>
>> Append
>> -> Funnel
>>      -> Partial Seq Scan
>> -> Funnel
>>      -> Partial Seq Scan
>> (repeated many times)
>>
>> In earlier versions of this patch, that was chewing up lots of DSM
>> segments.  But it seems to me, on further reflection, that it should
>> never use more than one at a time.  The first funnel node should
>> initialize its workers, and then when it finishes, all those workers
>> should get shut down cleanly and the DSM destroyed before the next
>> scan is initialized.
>>
>> Obviously we could do better here: if we put the Funnel on top of the
>> Append instead of underneath it, we could avoid shutting down and
>> restarting workers for every child node.  But even without that, I'm
>> hoping it's no longer the case that this uses more than one DSM at a
>> time.  If that's not the case, we should see if we can't fix that.
>>
> Currently it doesn't behave the way you are expecting: it destroys the
> DSM and performs a clean shutdown of the workers
> (DestroyParallelContext()) at the time of ExecEndFunnel(), which in
> this case happens when we finish execution of the Append node.
>
> One way to change it is to do the cleanup of the parallel context when
> we fetch the last tuple from the Funnel node (in ExecFunnel), as at
> that point we are sure that we don't need the workers or the DSM
> anymore.  Does that sound reasonable to you?

Yeah, I think that's exactly what we should do.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
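For concreteness, here is a minimal sketch of the approach agreed on
above.  DestroyParallelContext() is the function named in the
discussion; the FunnelState layout and the funnel_getnext() helper are
assumed names for illustration, not the patch's actual API.

static TupleTableSlot *
ExecFunnel(FunnelState *node)
{
    TupleTableSlot *slot;

    /* Fetch the next tuple from a worker (or the local partial scan). */
    slot = funnel_getnext(node);    /* assumed helper, for illustration */

    /*
     * Once the last tuple has been returned, the workers and the DSM
     * segment are no longer needed, so shut them down now rather than
     * waiting for ExecEndFunnel().  An Append over many Funnel nodes
     * then holds at most one DSM segment at a time.
     */
    if (TupIsNull(slot) && node->pcxt != NULL)
    {
        DestroyParallelContext(node->pcxt);
        node->pcxt = NULL;
    }

    return slot;
}

With this shape, ExecEndFunnel() only needs to destroy the parallel
context when the scan was not run to completion (e.g. under a LIMIT),
by checking whether node->pcxt is still non-NULL.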