Re: Parallel Seq Scan

From: Robert Haas
Subject: Re: Parallel Seq Scan
Date:
Msg-id: CA+TgmoYRFA42UpZLK9_CTRN5fOOr73kK69xv3mvJa7H_uqyvPw@mail.gmail.com
In response to: Re: Parallel Seq Scan  (Haribabu Kommi <kommi.haribabu@gmail.com>)
Responses: Re: Parallel Seq Scan  (Haribabu Kommi <kommi.haribabu@gmail.com>)
List: pgsql-hackers
On Fri, Sep 18, 2015 at 4:03 AM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:
> On Thu, Sep 3, 2015 at 8:21 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>
>> Attached, find the rebased version of patch.
>
> Here are the performance test results:

Thanks, this is really interesting.  I'm very surprised by how much
kernel overhead this shows.  I wonder where that's coming from.  The
writes to and reads from the shm_mq shouldn't need to touch the kernel
at all except for page faults; that's why I chose this form of IPC.
It could be that the signals which are sent for flow control are
chewing up a lot of cycles, but if that's the problem, it's not very
clear from here.  copy_user_generic_string doesn't sound like
something related to signals.  And why all the kernel time in
_spin_lock?  Maybe perf -g would help us tease out where this kernel
time is coming from.

Some of this may be due to rapid context switching.  Suppose the
master process is the bottleneck.  Then each worker will fill up the
queue and go to sleep.  When the master reads a tuple, the worker has
to wake up and write a tuple, and then it goes back to sleep.  This
might be an indication that we need a bigger shm_mq size.  I think
that would be worth experimenting with: if we double or quadruple or
increase by 10x the queue size, what happens to performance?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


