On Fri, Apr 20, 2018 at 6:36 PM, Adrien Nayrat
<adrien.nayrat@anayrat.info> wrote:
> Hello,
>
> I tried to understand this issue and it seems the Gather node only takes into
> account its own buffer usage:
>
>
> create unlogged table t1 (c1 int);
> insert into t1 select generate_series(1,1000000);
> vacuum t1;
>
> explain (analyze,buffers,timing off,costs off) select count(*) from t1;
>                                QUERY PLAN
> ------------------------------------------------------------------------
>  Finalize Aggregate (actual rows=1 loops=1)
>    Buffers: shared hit=1531
>    ->  Gather (actual rows=3 loops=1)
>          Workers Planned: 2
>          Workers Launched: 2
>          Buffers: shared hit=1531
>          ->  Partial Aggregate (actual rows=1 loops=3)
>                Buffers: shared hit=4425
>                ->  Parallel Seq Scan on t1 (actual rows=333333 loops=3)
>                      Buffers: shared hit=4425
>
>
> The same query without parallelism:
>
>                      QUERY PLAN
> ----------------------------------------------------
>  Aggregate (actual rows=1 loops=1)
>    Buffers: shared hit=4425
>    ->  Seq Scan on t1 (actual rows=1000000 loops=1)
>          Buffers: shared hit=4425
>
>
> We can see that Parallel Seq Scan and Partial Aggregate report 4425 buffers,
> the same as the plan without parallelism, whereas Gather and Finalize
> Aggregate report only 1531.
>
>
> I put an elog debug message around these lines in execParallel.c:
>
>     /* Accumulate the statistics from all workers. */
>     instrument = GetInstrumentationArray(instrumentation);
>     instrument += i * instrumentation->num_workers;
>     for (n = 0; n < instrumentation->num_workers; ++n)
>     {
>         elog(LOG, "worker %d - shared_blks_read: %ld - shared_blks_hit: %ld", n,
>              instrument[n].bufusage.shared_blks_read,
>              instrument[n].bufusage.shared_blks_hit);
>         InstrAggNode(planstate->instrument, &instrument[n]);
>     }
>
>
> And I get these messages:
>
> LOG: worker 0 - shared_blks_read: 0 - shared_blks_hit: 1443
> LOG: worker 1 - shared_blks_read: 0 - shared_blks_hit: 1451
>
> The workers' hits (1443 + 1451) plus the 1531 reported at the Gather node add
> up to 4425, which matches the totals reported below the Gather.
>
I think you can try the 'verbose' option; it will give per-worker stats.
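
For example (a sketch of the suggestion above; the amount of per-worker detail
shown depends on the server version):

```sql
-- Illustrative only: with VERBOSE, EXPLAIN ANALYZE can print per-worker
-- detail for parallel nodes in addition to the aggregated numbers.
explain (analyze, verbose, buffers, timing off, costs off)
select count(*) from t1;
```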
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com