Re: Steps inside ExecEndGather
| From | Amit Kapila |
|---|---|
| Subject | Re: Steps inside ExecEndGather |
| Date | |
| Msg-id | CAA4eK1L69_Q4PVpEtVpTDHnYzxJ9WUyfOkF7KyDw7pAb6C9ccg@mail.gmail.com |
| In reply to | Steps inside ExecEndGather (Kouhei Kaigai <kaigai@ak.jp.nec.com>) |
| Responses | Re: Steps inside ExecEndGather |
| List | pgsql-hackers |
On Mon, Oct 17, 2016 at 6:22 AM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> Hello,
>
> I'm now trying to carry extra performance statistics on CustomScan
> (like DMA transfer rate, execution time of GPU kernels, etc...)
> from parallel workers to the leader process using the DSM segment
> attached by the parallel context.
> An extension can request an arbitrary length of DSM via the
> ExecCustomScanEstimate hook, so it looks like the leader and workers
> can share the DSM area. However, we have a problem with this design.
>
> Below is the implementation of ExecEndGather().
>
> void
> ExecEndGather(GatherState *node)
> {
>     ExecShutdownGather(node);
>     ExecFreeExprContext(&node->ps);
>     ExecClearTuple(node->ps.ps_ResultTupleSlot);
>     ExecEndNode(outerPlanState(node));
> }
>
> It calls ExecShutdownGather() prior to the recursive call of ExecEndNode().
> The DSM segment is released by this call, so the child node cannot
> reference the DSM at the time of ExecEndNode().
>
Before releasing the DSM, we already collect all the statistics and
instrumentation information from each node; refer to
ExecParallelFinish()->ExecParallelRetrieveInstrumentation(). So I am
wondering why you can't collect the additional information in the same
way?
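The key point is the ordering: the leader copies per-worker instrumentation out of the shared segment into backend-local memory, and only then is the segment released. Below is a minimal, self-contained C sketch of that pattern. The struct and function names (WorkerStats, retrieve_stats) are illustrative stand-ins, not PostgreSQL APIs; the real code in ExecParallelRetrieveInstrumentation() does the equivalent copy from the DSM before dsm_detach().

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-worker statistics block, standing in for whatever
 * a CustomScan extension would place in its DSM area. */
typedef struct WorkerStats
{
    double dma_transfer_mb;  /* illustrative counter */
    double gpu_kernel_ms;    /* illustrative counter */
} WorkerStats;

/*
 * Copy each worker's stats out of the shared area into leader-local
 * memory.  As long as this runs before the shared segment goes away
 * (the analogue of calling it before ExecShutdownGather releases the
 * DSM), the child/leader can keep using the local copy afterwards.
 */
static WorkerStats *
retrieve_stats(const WorkerStats *shared, int nworkers)
{
    WorkerStats *local = malloc(sizeof(WorkerStats) * nworkers);

    memcpy(local, shared, sizeof(WorkerStats) * nworkers);
    return local;
}
```

In real terms, an extension would hook the same place instrumentation is gathered, copy its custom counters into node-local state there, and the later ExecEndNode() walk would read the local copy rather than the (now released) DSM.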
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com