Re: Parallel Seq Scan vs kernel read ahead

From: Thomas Munro
Subject: Re: Parallel Seq Scan vs kernel read ahead
Date:
Msg-id: CA+hUKGKSAS=pgdNKG5o2OseKvuBac7WALEXG-B19c1CWdLCX2g@mail.gmail.com
In response to: Re: Parallel Seq Scan vs kernel read ahead  (Soumyadeep Chakraborty <sochakraborty@pivotal.io>)
List: pgsql-hackers
On Fri, May 22, 2020 at 1:14 PM Soumyadeep Chakraborty
<sochakraborty@pivotal.io> wrote:
> Some more data points:

Thanks!

> max_parallel_workers_per_gather    Time(seconds)
>                               0           29.04s
>                               1           29.17s
>                               2           28.78s
>                               6          291.27s
>
> I checked with explain analyze to ensure that the number of workers
> planned = max_parallel_workers_per_gather
>
> Apart from the last result (max_parallel_workers_per_gather=6), all
> the other results seem favorable.
> Could the last result be down to the fact that the number of workers
> planned exceeded the number of vCPUs?

Interesting.  I guess it has to do with patterns emerging from various
parameters, like that magic number 64 I hard-coded into the test patch,
and other unknowns in your storage stack.  On my end I see a small
drop-off that I can't explain yet, but nothing of that magnitude.
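
For context, the chunking idea in the test patch is roughly the following
(a simplified sketch, not the actual diff: the chunk size of 64 and the
phs_nallocated/phs_nblocks fields are real, but the helper name and the
per-worker bookkeeping are made up here, and the synchronized-scan start
offset is ignored):

#include "postgres.h"
#include "access/relscan.h"
#include "port/atomics.h"

/* The "magic number 64": blocks handed out per atomic claim. */
#define PARALLEL_SEQSCAN_CHUNK_SIZE 64

/*
 * Sketch: each call either returns the next block of the chunk this worker
 * already claimed, or claims a fresh run of 64 consecutive blocks with one
 * fetch-add on the shared counter.  Because each worker then reads its run
 * sequentially, the kernel's read-ahead heuristics still see sequential I/O.
 */
static BlockNumber
chunked_nextpage_sketch(ParallelBlockTableScanDesc pbscan,
                        BlockNumber *chunk_next, BlockNumber *chunk_remaining)
{
    if (*chunk_remaining == 0)
    {
        uint64      claimed = pg_atomic_fetch_add_u64(&pbscan->phs_nallocated,
                                                      PARALLEL_SEQSCAN_CHUNK_SIZE);

        if (claimed >= pbscan->phs_nblocks)
            return InvalidBlockNumber;  /* relation exhausted */
        *chunk_next = (BlockNumber) claimed;
        *chunk_remaining = (BlockNumber)
            Min(PARALLEL_SEQSCAN_CHUNK_SIZE, pbscan->phs_nblocks - claimed);
    }
    (*chunk_remaining)--;
    return (*chunk_next)++;
}

The chunk only serves to keep each worker's reads contiguous; the shared
counter still hands out every block of the relation exactly once.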

> I also wanted to evaluate Zedstore with your patch.
> I used the same setup as above.
> No discernible difference though, maybe I'm missing something:

It doesn't look like Zedstore uses table_block_parallelscan_nextpage() as
a block allocator, so it's not affected by the patch.  It has its own
allocator, zs_parallelscan_nextrange(), which does
pg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids, ZS_PARALLEL_CHUNK_SIZE),
and that macro is 0x100000.
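
For comparison, that allocation scheme looks roughly like this (a sketch
reconstructed from the names above; the struct layout, bounds check and
function signature are guesses, not Zedstore's actual code):

#include "postgres.h"
#include "port/atomics.h"

#define ZS_PARALLEL_CHUNK_SIZE  0x100000    /* ~1M TIDs claimed at a time */

/* Hypothetical shared scan state; only pzs_allocatedtids is named above. */
typedef struct ZSParallelScanSketch
{
    pg_atomic_uint64 pzs_allocatedtids;     /* next TID to hand out */
    uint64           pzs_endtid;            /* end of the table's TID space */
} ZSParallelScanSketch;

/*
 * Each worker claims a fixed 0x100000-wide TID range with one fetch-add,
 * independently of table_block_parallelscan_nextpage(), which is why the
 * patch makes no difference for Zedstore.
 */
static bool
zs_parallelscan_nextrange_sketch(ZSParallelScanSketch *pzscan,
                                 uint64 *start, uint64 *end)
{
    uint64      allocated = pg_atomic_fetch_add_u64(&pzscan->pzs_allocatedtids,
                                                    ZS_PARALLEL_CHUNK_SIZE);

    if (allocated >= pzscan->pzs_endtid)
        return false;                       /* no TIDs left to scan */
    *start = allocated;
    *end = Min(allocated + ZS_PARALLEL_CHUNK_SIZE, pzscan->pzs_endtid);
    return true;
}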


