Re: Parallel INSERT (INTO ... SELECT ...)

From Thomas Munro
Subject Re: Parallel INSERT (INTO ... SELECT ...)
Date
Msg-id CA+hUKG+ZjyAy2Z-fjz4QFdXauoaLVc-Z2gg5p2vtP2shUeHgHA@mail.gmail.com
Whole thread Raw
In response to Re: Parallel INSERT (INTO ... SELECT ...)  (Greg Nancarrow <gregn4422@gmail.com>)
Responses Re: Parallel INSERT (INTO ... SELECT ...)  (Greg Nancarrow <gregn4422@gmail.com>)
List pgsql-hackers
On Fri, Oct 9, 2020 at 3:48 PM Greg Nancarrow <gregn4422@gmail.com> wrote:
> It does give me the incentive to look beyond that issue and see
> whether parallel Update and parallel Delete are indeed possible. I'll
> be sure to give it a go!

Cool!

A couple more observations:

+       pathnode->path.parallel_aware = parallel_workers > 0 ? true : false;

Hmm, I think this may be bogus window dressing that only affects
EXPLAIN output.  If you change it to always assign false, it works
just the same, except EXPLAIN says:

 Gather  (cost=15428.00..16101.14 rows=1000000 width=4)
   Workers Planned: 2
   ->  Insert on s  (cost=15428.00..16101.14 rows=208334 width=4)
         ->  Parallel Hash Join  (cost=15428.00..32202.28 rows=416667 width=4)

... instead of:

 Gather  (cost=15428.00..16101.14 rows=1000000 width=4)
   Workers Planned: 2
   ->  Parallel Insert on s  (cost=15428.00..16101.14 rows=208334 width=4)
         ->  Parallel Hash Join  (cost=15428.00..32202.28 rows=416667 width=4)

AFAICS it's not parallel-aware, it just happens to be running in
parallel with partial input and partial output (and, in this case, a
partial effect in terms of writes).  Parallel-aware is our term for
nodes that actually know they are running in parallel and do some
special coordination with their twins in other processes.
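
To make that concrete, here's a minimal sketch against the hunk quoted
above (the surrounding context in the actual patch may differ):

-       pathnode->path.parallel_aware = parallel_workers > 0 ? true : false;
+       /*
+        * Insert does no cross-worker coordination of its own; it is merely
+        * parallel-safe and runs on partial input, so don't mark it
+        * parallel-aware.
+        */
+       pathnode->path.parallel_aware = false;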

The estimated row count also looks wrong; at a guess, the parallel
divisor is applied twice.  Let me try that with
parallel_leader_participation=off (which disables some funky maths in
the row estimation and makes it straight division by the number of
processes):

 Gather  (cost=17629.00..18645.50 rows=1000000 width=4)
   Workers Planned: 2
   ->  Insert on s  (cost=17629.00..18645.50 rows=250000 width=4)
         ->  Parallel Hash Join  (cost=17629.00..37291.00 rows=500000 width=4)
[more nodes omitted]

Yeah, that was a join that spat out a million rows, and we correctly
estimated 500k per process, and then Insert (still with my hack to
turn off the bogus "Parallel" display in this case, but it doesn't
affect the estimation) estimated 250k per process, which is wrong.
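
To spell out the arithmetic as I read it (a reconstruction from the
numbers above, not a claim about the exact code path):

  1,000,000 total join rows / 2 processes = 500,000 per process  (correct, at the join)
    500,000 / 2 processes                 = 250,000 per process  (divided again at the Insert, wrong)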


