Re: Parallel Seq Scan vs kernel read ahead

From: David Rowley
Subject: Re: Parallel Seq Scan vs kernel read ahead
Date:
Msg-id: CAApHDvrXjU2tPV5BE1_WBKBAO8V2xb3R5S04Z+jC6Sinrx6EKw@mail.gmail.com
In reply to: Re: Parallel Seq Scan vs kernel read ahead  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: Parallel Seq Scan vs kernel read ahead  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Wed, 17 Jun 2020 at 03:20, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Mon, Jun 15, 2020 at 5:09 PM David Rowley <dgrowleyml@gmail.com> wrote:
> > * Perhaps when there are less than 2 full chunks remaining, workers
> > can just take half of what is left. Or more specifically
> > Max(pg_next_power2(remaining_blocks) / 2, 1), which ideally would work
> > out allocating an amount of pages proportional to the amount of beer
> > each mathematician receives in the "An infinite number of
> > mathematicians walk into a bar" joke, obviously with the exception
> > that we stop dividing when we get to 1. However, I'm not quite sure
> > how well that can be made to work with multiple bartenders working in
> > parallel.
>
> That doesn't sound nearly aggressive enough to me. I mean, let's
> suppose that we're concerned about the scenario where one chunk takes
> 50x as long as all the other chunks. Well, if we have 1024 chunks
> total, and we hit the problem chunk near the beginning, there will be
> no problem. In effect, there are 1073 units of work instead of 1024,
> and we accidentally assigned one guy 50 units of work when we thought
> we were assigning 1 unit of work. If there's enough work left that we
> can assign each other worker 49 units more than what we would have
> done had that chunk been the same cost as all the others, then there's
> no problem. So for instance if there are 4 workers, we can still even
> things out if we hit the problematic chunk more than ~150 chunks from
> the end. If we're closer to the end than that, there's no way to avoid
> the slow chunk delaying the overall completion time, and the problem
> gets worse as the problem chunk gets closer to the end.
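
To spell out the arithmetic above: the worker that draws the slow chunk
is busy for 50 units, during which the other 3 workers can each absorb
49 extra chunks, so the break-even point is 3 * 49 = 147, i.e. the ~150
chunks Robert mentions. A quick sketch of that calculation (the variable
names here are mine, purely illustrative):

    #include <stdio.h>

    int
    main(void)
    {
        int nworkers = 4;   /* parallel workers */
        int slow_cost = 50; /* the slow chunk costs 50x a normal chunk */

        /*
         * While one worker grinds through the slow chunk, the other
         * nworkers - 1 workers can each absorb slow_cost - 1 extra
         * chunks, so the scan still finishes evenly provided at least
         * this many chunks remain when the slow chunk is handed out.
         */
        printf("%d\n", (nworkers - 1) * (slow_cost - 1)); /* prints 147 */
        return 0;
    }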

I've got something like that in the attached.  Currently, I've set the
number of chunks to 2048 and I'm starting the ramp-down when 64 chunks
remain, which means the ramp-down begins when about 3.1% of the scan is
left. I didn't see the point of both going with a larger number of
chunks and having ramp-down code.

Attached are the patch and a .sql file with a function that can be used
to demonstrate which chunk sizes the patch will choose and to demo the
ramp-down.

e.g.
# select show_parallel_scan_chunks(1000000, 2048, 64);
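
For anyone who doesn't want to apply the attachment just to see the
shape of the allocations, here's a rough standalone C sketch of the
logic that function demonstrates. The constants match what I describe
above, but the function and variable names are mine rather than the
patch's:

    #include <stdio.h>

    #define NCHUNKS          2048 /* target number of chunks per scan */
    #define RAMPDOWN_CHUNKS    64 /* remaining chunks that trigger ramp-down */

    /* Round v up to the next power of 2 (32-bit). */
    static unsigned int
    next_power2(unsigned int v)
    {
        v--;
        v |= v >> 1;
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;
        return v + 1;
    }

    int
    main(void)
    {
        unsigned int nblocks = 1000000; /* relation size in blocks */
        unsigned int allocated = 0;
        unsigned int lastchunk = 0;
        unsigned int chunk;

        chunk = next_power2(nblocks / NCHUNKS);
        if (chunk == 0)
            chunk = 1;

        while (allocated < nblocks)
        {
            unsigned int thischunk;

            /* Near the end of the scan, halve the chunk size, down to 1. */
            if (chunk > 1 && allocated + chunk * RAMPDOWN_CHUNKS > nblocks)
                chunk >>= 1;

            /* Don't run off the end of the relation. */
            thischunk = chunk;
            if (allocated + thischunk > nblocks)
                thischunk = nblocks - allocated;

            /* Only report when the chunk size changes. */
            if (thischunk != lastchunk)
            {
                printf("block %u: chunk size %u\n", allocated, thischunk);
                lastchunk = thischunk;
            }
            allocated += thischunk;
        }
        return 0;
    }

For 1 million blocks this starts with 512-block chunks and then halves
the chunk size in stages over roughly the last 3% of the scan.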

It would be really good if people could test this using the test case
mentioned in [1]. We need to get a good idea of how this behaves on
various operating systems.

With a 32TB relation, the code will make the chunk size 16GB
(32TB / 2048 chunks).  Perhaps I should change the code to cap that at
1GB.

David

[1] https://www.postgresql.org/message-id/CAApHDvrfJfYH51_WY-iQqPw8yGR4fDoTxAQKqn%2BSa7NTKEVWtg%40mail.gmail.com
