Re: Parallel Seq Scan vs kernel read ahead

From: Soumyadeep Chakraborty
Subject: Re: Parallel Seq Scan vs kernel read ahead
Date:
Msg-id: CAE-ML+-wB7W9w04+Y=eVZUJ4PLvjC7EgLNJ_3o97bKvrHuBfoA@mail.gmail.com
In reply to: Re: Parallel Seq Scan vs kernel read ahead (David Rowley <dgrowleyml@gmail.com>)
Responses: Re: Parallel Seq Scan vs kernel read ahead (David Rowley <dgrowleyml@gmail.com>)
List: pgsql-hackers
Hi David,

Apologies for the delay; I had missed these emails.

On Tue, Jul 14, 2020 at 8:52 PM David Rowley <dgrowleyml@gmail.com> wrote:

> It would be good to know if the
> regression is repeatable or if it was affected by some other process.


These are the latest results on the same setup as [1].
(TL;DR: the FreeBSD VM with Google Persistent Disk is too unstable -
there is too much variability in performance to conclude that there is a
regression.)


With 100000000 rows:

master (606c384598):

max_parallel_workers_per_gather    Time(seconds)
                              0           20.09s
                              1            9.77s
                              2            9.92s
                              6            9.55s


v2 patch (applied on 606c384598):

max_parallel_workers_per_gather    Time(seconds)
                              0           18.34s
                              1            9.68s
                              2            9.15s
                              6            9.11s


The above results were averaged across 3 runs with little or no
deviation between runs. The absolute values are very different from the
results reported in [1].
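
For reference, each of the numbers above came from a psql session along
the following lines (a sketch only: the table name t is a placeholder
for the actual test table, for which see [1]):

\timing on
SET max_parallel_workers_per_gather = 2;  -- 0, 1, 2 or 6, as per the tables
SELECT count(*) FROM t;                   -- drives the (parallel) seq scan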

So, I tried to reproduce the regression I had reported in [1] with
150000000 rows:

master (449e14a561)

max_parallel_workers_per_gather    Time(seconds)
                              0         42s, 42s
                              1       395s, 393s
                              2       404s, 403s
                              6       403s, 403s

Thomas' patch (applied on 449e14a561):

max_parallel_workers_per_gather    Time(seconds)
                              0         43s, 43s
                              1        203s, 42s
                              2         42s, 42s
                              6         44s, 43s


v2 patch (applied on 449e14a561):

max_parallel_workers_per_gather    Time(seconds)
                              0       274s, 403s
                              1       419s, 419s
                              2       448s, 448s
                              6       137s, 419s


As you can see, I got wildly different results with 150000000 rows (even
between runs of the same experiment). I don't think the environment is
stable enough to tell whether there is any regression.

What I can say is that there were no processes apart from Postgres
running on the system. Also, the environment is pretty constrained: with
150000000 rows there is just 1G of free disk space before the start of
every run, on top of a mere 32M of shared_buffers and only 4G of RAM.
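
In psql terms, the constrained parts of the setup look like this (a
hypothetical session; only the values already mentioned above are
assumed):

SHOW shared_buffers;                   -- 32MB
SHOW max_parallel_workers_per_gather;  -- varied per run: 0, 1, 2, 6
-- RAM (4G) and free disk space (~1G before each 150000000-row run) are
-- host-level constraints and are not visible from psql.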

To be honest, I don't know much about Google Persistent Disk. I simply
provisioned one along with a GCP VM for testing on FreeBSD, since Thomas
had mentioned that FreeBSD UFS is a bad case for parallel seq scan.

> It would be good to see EXPLAIN (ANALYZE, BUFFERS) with SET
> track_io_timing = on; for each value of max_parallel_workers.
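
For completeness, that request amounts to something like the following
(again a sketch, with t standing in for the test table):

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM t;  -- would show per-node timings, buffer hits/reads
                         -- and I/O timings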

As for EXPLAIN ANALYZE, running it on this system incurs a staggering
amount of timing overhead. This is the output of pg_test_timing on the
FreeBSD GCP VM:

$ /usr/local/pgsql/bin/pg_test_timing -d 50
Testing timing overhead for 50 seconds.
Per loop time including overhead: 4329.80 ns
Histogram of timing durations:
  < us   % of total      count
     1      0.00000          0
     2      0.00000          0
     4      3.08896     356710
     8     95.97096   11082616
    16      0.37748      43591
    32      0.55502      64093
    64      0.00638        737
   128      0.00118        136
   256      0.00002          2

As a point of comparison, on my local Ubuntu workstation:

$ /usr/local/pgsql/bin/pg_test_timing -d 50
Testing timing overhead for 50 seconds.
Per loop time including overhead: 22.65 ns
Histogram of timing durations:
  < us   % of total      count
     1     97.73691 2157634382
     2      2.26246   49945854
     4      0.00039       8711
     8      0.00016       3492
    16      0.00008       1689
    32      0.00000         63
    64      0.00000          1

This is why I opted to simply use \timing on.
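
To put the ~4330 ns per timing call in perspective, here is a rough
back-of-envelope estimate. It assumes on the order of two timer reads
per row per plan node when EXPLAIN ANALYZE instrumentation is enabled
(that per-row cost is my assumption, not something I measured):

SELECT 150000000 * 2 * 4329.80 / 1e9 AS est_timing_overhead_seconds;
-- ~1300 s of pure timing overhead for a single instrumented node on the
-- GCP VM, versus ~6.8 s at 22.65 ns/call on the local workstation, and
-- versus the ~42 s the uninstrumented 150000000-row scan takes above.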

Regards,

Soumyadeep (VMware)

[1] https://www.postgresql.org/message-id/CADwEdoqirzK3H8bB=xyJ192EZCNwGfcCa_WJ5GHVM7Sv8oenuA@mail.gmail.com


