Re: [HACKERS] Parallel Append implementation

From: Amit Kapila
Subject: Re: [HACKERS] Parallel Append implementation
Date:
Msg-id: CAA4eK1+VdHKkT_m1OU+kwB8jJD+Ob8KrYPEVKjEaxPYf=RkdoA@mail.gmail.com
In reply to: Re: [HACKERS] Parallel Append implementation  (Amit Khandekar <amitdkhan.pg@gmail.com>)
Responses: Re: [HACKERS] Parallel Append implementation  (Amit Khandekar <amitdkhan.pg@gmail.com>)
List: pgsql-hackers
On Fri, Sep 8, 2017 at 3:59 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
> On 7 September 2017 at 11:05, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> On Thu, Aug 31, 2017 at 12:47 PM, Amit Khandekar <amitdkhan.pg@gmail.com> wrote:
>> 3.
>> +/* ----------------------------------------------------------------
>> + * exec_append_leader_next
>> + *
>> + * To be used only if it's a parallel leader. The backend should scan
>> + * backwards from the last plan. This is to prevent it from taking up
>> + * the most expensive non-partial plan, i.e. the first subplan.
>> + * ----------------------------------------------------------------
>> + */
>> +static bool
>> +exec_append_leader_next(AppendState *state)
>>
>> From above explanation, it is clear that you don't want backend to
>> pick an expensive plan for a leader, but the reason for this different
>> treatment is not clear.
>
> Explained it, saying that for more workers, a leader spends more work
> in processing the worker tuples, and less work contributing to
> parallel processing. So it should not take expensive plans, otherwise
> it will affect the total time to finish Append plan.
>

In that case, why can't we have the workers also process the subplans
in the same order; what is the harm in that?  Also, the leader will
always scan the subplans from the last subplan even if all the
subplans are partial plans.  I think this will be an unnecessary
difference between the strategy of the leader and the workers,
especially when all paths are partial.  I think the selection of the
next subplan might become simpler if we use the same strategy for
workers and the leader.
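
To make sure we are talking about the same thing, here is roughly the
difference I mean (illustration only, not taken from your patch; I am
assuming the shared-state names as_pstate/pa_finished for the sketch):

static bool
exec_append_leader_next_sketch(AppendState *state)
{
	ParallelAppendState *pstate = state->as_pstate;	/* assumed field name */
	int			whichplan;

	/*
	 * Leader-only strategy: walk the subplans backwards so that the
	 * leader picks up the cheaper (partial) plans at the end of the
	 * list first.
	 */
	for (whichplan = state->as_nplans - 1; whichplan >= 0; whichplan--)
	{
		if (!pstate->pa_finished[whichplan])	/* assumed shared flag array */
		{
			state->as_whichplan = whichplan;
			return true;
		}
	}

	return false;				/* no subplan left to execute */
}

A common strategy would just run the same loop forwards (whichplan = 0;
whichplan < state->as_nplans; whichplan++) for both the leader and the
workers, which seems simpler, at least when all subplans are partial.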

Few more comments:

1.
+ else if (IsA(subpath, MergeAppendPath))
+ {
+ MergeAppendPath *mpath = (MergeAppendPath *) subpath;
+
+ /*
+ * If at all MergeAppend is partial, all its child plans have to be
+ * partial : we don't currently support a mix of partial and
+ * non-partial MergeAppend subpaths.
+ */
+ if (is_partial)
+ return list_concat(partial_subpaths, list_copy(mpath->subpaths));

In which situation is a partial MergeAppendPath generated?  Can you
provide one example of such a path?

2.
add_paths_to_append_rel()
{
..
+ /* Consider parallel append path. */
+ if (pa_subpaths_valid)
+ {
+ AppendPath *appendpath;
+ int parallel_workers;
+
+ parallel_workers = get_append_num_workers(pa_partial_subpaths,
+  pa_nonpartial_subpaths);
+ appendpath = create_append_path(rel, pa_nonpartial_subpaths,
+ pa_partial_subpaths,
+ NULL, parallel_workers, true,
+ partitioned_rels);
+ add_partial_path(rel, (Path *) appendpath);
+ }
+ /*
- * Consider an append of partial unordered, unparameterized partial paths.
+ * Consider non-parallel partial append path. But if the parallel append
+ * path is made out of all partial subpaths, don't create another partial
+ * path; we will keep only the parallel append path in that case.
+ */
- if (partial_subpaths_valid)
+ if (partial_subpaths_valid && !pa_all_partial_subpaths)
 {
 AppendPath *appendpath;
 ListCell   *lc;
 int parallel_workers = 0;

 /*
- * Decide on the number of workers to request for this append path.
- * For now, we just use the maximum value from among the members.  It
- * might be useful to use a higher number if the Append node were
- * smart enough to spread out the workers, but it currently isn't.
+ * To decide the number of workers, just use the maximum value from
+ * among the children.
 */
 foreach(lc, partial_subpaths)
 {
@@ -1421,9 +1502,9 @@ add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel,
 }
 Assert(parallel_workers > 0);

- /* Generate a partial append path. */
- appendpath = create_append_path(rel, partial_subpaths, NULL,
- parallel_workers, partitioned_rels);
+ appendpath = create_append_path(rel, NIL, partial_subpaths,
+ NULL, parallel_workers, false,
+ partitioned_rels);
 add_partial_path(rel, (Path *) appendpath);
 }
..
}

I think it might be better to add a sentence explaining why we choose
a different way to decide the number of workers in the second case
(non-parallel-aware append).  Also, do you think a non-parallel-aware
Append will ever be better when a parallel-aware append is available?
What I mean is: let's try to create the non-parallel-aware append only
when the parallel-aware append is not possible, something like the
sketch below.
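
Roughly (just a sketch reusing the code from your hunk above, not
tested; the foreach body is the existing take-the-maximum-of-the-
children logic):

if (pa_subpaths_valid)
{
	AppendPath *appendpath;
	int			parallel_workers;

	/* parallel-aware Append, preferred whenever it can be built */
	parallel_workers = get_append_num_workers(pa_partial_subpaths,
											  pa_nonpartial_subpaths);
	appendpath = create_append_path(rel, pa_nonpartial_subpaths,
									pa_partial_subpaths,
									NULL, parallel_workers, true,
									partitioned_rels);
	add_partial_path(rel, (Path *) appendpath);
}
else if (partial_subpaths_valid)
{
	AppendPath *appendpath;
	ListCell   *lc;
	int			parallel_workers = 0;

	/* non-parallel-aware partial Append, only as a fallback */
	foreach(lc, partial_subpaths)
	{
		Path	   *path = (Path *) lfirst(lc);

		parallel_workers = Max(parallel_workers, path->parallel_workers);
	}
	Assert(parallel_workers > 0);

	appendpath = create_append_path(rel, NIL, partial_subpaths,
									NULL, parallel_workers, false,
									partitioned_rels);
	add_partial_path(rel, (Path *) appendpath);
}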

3.
+ * evaluates to a value just a bit greater than max(w1,w2, w3). So, we

The spacing between w1, w2, and w3 is not consistent.

4.
-  select count(*) from a_star;
-select count(*) from a_star;
+  select round(avg(aa)), sum(aa) from a_star;
+select round(avg(aa)), sum(aa) from a_star;

Why have you changed the existing test?  It seems count(*) would also
give what you are expecting.



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
