Re: Gather performance analysis

From: Dilip Kumar
Subject: Re: Gather performance analysis
Date:
Msg-id: CAFiTN-stk2j3V9RxfUs_jnKMDqjhQuMpAcqtR14G-ROY6KqHrQ@mail.gmail.com
In response to: Re: Gather performance analysis (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses: Re: Gather performance analysis (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Re: Gather performance analysis (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Tue, Sep 7, 2021 at 8:41 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:
Hi,

The numbers presented in this thread seem very promising - clearly
there's significant potential for improvements. I'll run similar
benchmarks too, to get a better understanding of this.

Thanks for showing interest.
 

Can you share some basic details about the hardware you used?
Particularly the CPU model - I guess this might explain some of the
results, e.g. if CPU caches are ~1MB, that'd explain why setting
tup_queue_size to 1MB improves things, but 4MB is a bit slower.
Similarly, number of cores might explain why 4 workers perform better
than 8 or 16 workers.

I have attached the output of lscpu.  I think batching the data before updating the shared memory will be a win because it avoids frequent cache misses, and IMHO the benefit will be larger on machines with more CPU sockets.
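
To illustrate what I mean by batching, here is a toy sketch (not the actual shm_mq code; SimpleQueue, pending and MQ_BATCH_THRESHOLD are made-up names for this example).  The sender copies data into the ring as usual, but publishes the shared write counter, which the receiver's CPU also reads, only once per ~4kB, so the contended cache line is touched much less often:

#include <stdint.h>
#include <string.h>

#define MQ_RING_SIZE        65536   /* assumed queue size */
#define MQ_BATCH_THRESHOLD  4096    /* publish every ~4kB */

typedef struct SimpleQueue
{
    volatile uint64_t bytes_written;   /* shared counter read by the receiver */
    char        ring[MQ_RING_SIZE];    /* data ring buffer */
} SimpleQueue;

typedef struct SenderState
{
    SimpleQueue *queue;
    uint64_t     pending;              /* bytes written but not yet published */
} SenderState;

/* Copy data into the ring, but update the shared counter only in batches. */
static void
queue_send(SenderState *sender, const char *data, size_t len)
{
    SimpleQueue *q = sender->queue;
    uint64_t     offset = q->bytes_written + sender->pending;

    /* wrap-around and full-queue handling omitted for brevity */
    memcpy(&q->ring[offset % MQ_RING_SIZE], data, len);
    sender->pending += len;

    /* touch the shared cache line only once per ~4kB of data */
    if (sender->pending >= MQ_BATCH_THRESHOLD)
    {
        q->bytes_written += sender->pending;
        sender->pending = 0;
        /* the real code would also set the receiver's latch here */
    }
}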

Now, this is mostly expected, but the consequence is that maybe things
like queue size should be tunable/dynamic, not hard-coded?

Actually, my intention behind changing the tuple queue size was just to observe the behavior: do workers really stall on the queue while sending tuples?  The perf report showed some load on WaitLatch on the worker side, so I ran this experiment.  I saw some benefit, but it was not really huge.  I am not sure whether we want to simply increase the tuple queue size or make it tunable, but if we want to support redistribute operators sometime in the future then maybe we should make it grow dynamically at runtime, perhaps using DSA, or DSA plus shared files.
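
Just to sketch the dynamic-growth idea (a hypothetical illustration only, not part of the attached patches): the queue could start small and double whenever the sender finds it full.  A real implementation would have to allocate the new space from a DSA area or a shared file so that both the leader and the workers can map it, rather than with realloc() as in this toy version:

#include <stdlib.h>
#include <string.h>

typedef struct GrowableQueue
{
    char   *buf;            /* queue storage (DSA memory in real code) */
    size_t  capacity;       /* current queue size in bytes */
    size_t  used;           /* bytes currently enqueued */
} GrowableQueue;

static void
queue_init(GrowableQueue *q, size_t initial_capacity)
{
    q->buf = malloc(initial_capacity);
    q->capacity = initial_capacity;
    q->used = 0;
}

/* Enqueue, doubling the buffer instead of blocking when it is full. */
static void
queue_put(GrowableQueue *q, const char *data, size_t len)
{
    while (q->used + len > q->capacity)
    {
        q->capacity *= 2;                      /* grow at runtime */
        q->buf = realloc(q->buf, q->capacity); /* DSA segment in real code */
    }
    memcpy(q->buf + q->used, data, len);
    q->used += len;
}

The doubling keeps the amortized growth cost low, which seems like what a redistribute operator that must not block the sender would need.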
 
As for the patches, I think the proposed changes are sensible, but I
wonder what queries might get slower. For example with the batching
(updating the counter only once every 4kB), that pretty much transfers
data in larger chunks with higher latency. So what if the query needs
only a small chunk, like a LIMIT query? Similarly, this might mean the
upper parts of the plan have to wait for the data for longer, and thus
can't start some async operation (like send them to a FDW, or something
like that). I do admit those are theoretical queries, I haven't tried
creating such query.

Yeah, I was thinking about such cases too.  Basically, this design can increase the startup cost of the Gather node.  I will also try to construct such cases and test them.
 

FWIW I've tried applying both patches at the same time, but there's a
conflict in shm_mq_sendv - not a complex one, but I'm not sure what's
the correct solution. Can you share a "combined" patch?

Actually, both of those patches are the same; "v1-0001-Optimize-parallel-tuple-send-shm_mq_send_bytes.patch" is the cleaner version of the first patch.  For the configurable tuple queue size I did not send a patch, because I used it only for testing and never intended to propose anything.  Most of my latest performance data was collected with only "v1-0001-Optimize-parallel-tuple-send-shm_mq_send_bytes.patch" and the default tuple queue size.

But I am attaching both patches in case you want to play around with them.


--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachments
