Thread: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra


[HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Brent Dearth
Date:

I just recently "upgraded" to High Sierra and am experiencing horrendous CREATE DATABASE performance. Creating a database from a 3G template DB used to take ~1m but post-upgrade is taking ~22m at a sustained write of around 4MB/s. Occasionally, attempting to create an empty database hangs indefinitely as well. When this happens, restarting the Postgres server allows empty database initialization in ~1s.

I had been running on an encrypted APFS volume (FileVault), but after dropping encryption, I saw the time drop to ~15m per run. Still a far cry from the previous ~1m.

A multi-threaded pg_restore seems to sustain writes of ~38MB/s and completes in about the same time as pre-upgrade (Sierra), so I'm not sure it's entirely related to APFS / disk IO.

I've completely rebuilt the Postgres data directory, re-installed Postgres (Postgres.app 2.0.5) etc. I don't have any reasonable explanation for what could have broken so catastrophically.

A coworker has seen the exact same issue. Has anyone else experienced this, or have any insight into what could be happening?


Thanks in advance!

Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
Brent Dearth <brent.dearth@gmail.com> writes:
> I just recently "upgraded" to High Sierra and experiencing horrendous CREATE
> DATABASE performance. Creating a database from a 3G template DB used to
> take ~1m but post-upgrade is taking ~22m at a sustained write of around
> 4MB/s. Occasionally, attempting to create an empty database hangs
> indefinitely as well. When this happens, restarting the Postgres server
> allows empty database initialization in ~1s.

What PG version are you running?

I tried to reproduce this, using HEAD and what I had handy:
(a) Late 2016 MacBook Pro, 2.7GHz i7, still on Sierra
(b) Late 2013 MacBook Pro, 2.3GHz i7, High Sierra, drive is converted to APFS

I made a ~7.5GB test database using "pgbench -i -s 500 bench" and
then cloned it with "create database b2 with template bench".
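
(Spelled out, the reproduction is roughly the following; the createdb/psql
wrappers here are just one way to run it, and the database names are the
ones mentioned above:

  createdb bench
  pgbench -i -s 500 bench      # builds the ~7.5GB template database
  time psql -d postgres -c "CREATE DATABASE b2 WITH TEMPLATE bench"

Run it once with fsync=off and once with fsync=on in postgresql.conf.)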

Case 1: fsync off.
Machine A did the clone in 5.6 seconds, machine B in 12.9 seconds.

Considering the CPU speed difference and the fact that Apple put
significantly faster SSDs into the 2016 models, I'm not sure this
difference is due to anything but better hardware.

Case 2: fsync on.
Machine A did the clone in 7.5 seconds, machine B in 2523.5 sec (42 min!).

So something is badly busted in APFS' handling of fsync, and/or
we're doing it in a bad way.

Interestingly, pg_test_fsync shows only about a factor-of-2 difference
in the timings for regular file fsyncs.  So I poked into non-fsync
logic that we'd added recently, and after a while found that diking out
the msync code path in pg_flush_data reduces machine B's time to an
entirely reasonable 11.5 seconds.
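
(For reference, raw fsync timings can be checked directly on the data volume
with pg_test_fsync; a sketch, assuming the data directory lives under
/usr/local/var/postgres -- adjust the path for your install:

  pg_test_fsync -f /usr/local/var/postgres/fsync_test.tmp
)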

In short, therefore, APFS cannot cope with the way we're using msync().
I observe that the copy gets slower and slower as it runs (watching the
transfer rate with "iostat 1"), which makes me wonder if there's some
sort of O(N^2) issue in the kernel logic for this.  But anyway, as
a short-term workaround you might try

diff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c
index b0c174284b..af35de5f7d 100644
--- a/src/backend/storage/file/fd.c
+++ b/src/backend/storage/file/fd.c
@@ -451,7 +451,7 @@ pg_flush_data(int fd, off_t offset, off_t nbytes)
 		return;
 	}
 #endif
-#if !defined(WIN32) && defined(MS_ASYNC)
+#if 0
 	{
 		void	   *p;
 		static int	pagesize = 0;

        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
I wrote:
> In short, therefore, APFS cannot cope with the way we're using msync().

I experimented with this further by seeing whether the msync() code path
is of any value on Sierra either.  The answer seems to be "no": cloning
a scale-1000 pgbench database takes about 17-18 seconds on my Sierra
laptop using unmodified HEAD, but if I dike out the msync() logic then
it takes 16-17 seconds.  Both numbers jump around a little, but using
msync is strictly worse.

I propose therefore that an appropriate fix is to unconditionally disable
the msync code path on Darwin, as we have already done for Windows.  When
and if Apple changes their kernel so that this path is actually of some
value, we can figure out how to detect whether to use it.
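
(As a sketch of what that would look like in pg_flush_data -- illustrative
only, not necessarily the exact form that would get committed -- the existing
Windows exclusion just grows a Darwin test:

-#if !defined(WIN32) && defined(MS_ASYNC)
+#if !defined(WIN32) && !defined(__darwin__) && defined(MS_ASYNC)
)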

The msync logic seems to date back to this thread:

https://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1506011320000.28433%40sto

wherein Andres opined
>> I think this patch primarily needs:
>> * Benchmarking on FreeBSD/OSX to see whether we should enable the
>>   mmap()/msync(MS_ASYNC) method by default. Unless somebody does so, I'm
>>   inclined to leave it off till then.

but so far as I can tell from the thread, only testing on FreeBSD ever
got done.  So there's no evidence that this was ever beneficial on macOS,
and we now have evidence that it's between counterproductive and
catastrophic depending on which kernel version you look at.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Brent Dearth
Date:
Thanks for this breakdown Tom! 

FWIW - I'm on Postgres 9.6.5 as bundled with Postgres.app (2.0.5), running on a 2013 MBP (2.7GHz i7 / 16GB / SSD). It looks like this might be a priority for an upcoming release, so I might try to hold out for a downstream fix, but thanks for the patch. It will help if we need to get custom builds out to fellow devs should this become too unbearable.

On Mon, Oct 2, 2017 at 1:42 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
I wrote:
> In short, therefore, APFS cannot cope with the way we're using msync().

I experimented with this further by seeing whether the msync() code path
is of any value on Sierra either.  The answer seems to be "no": cloning
a scale-1000 pgbench database takes about 17-18 seconds on my Sierra
laptop using unmodified HEAD, but if I dike out the msync() logic then
it takes 16-17 seconds.  Both numbers jump around a little, but using
msync is strictly worse.

I propose therefore that an appropriate fix is to unconditionally disable
the msync code path on Darwin, as we have already done for Windows.  When
and if Apple changes their kernel so that this path is actually of some
value, we can figure out how to detect whether to use it.

The msync logic seems to date back to this thread:

https://www.postgresql.org/message-id/flat/alpine.DEB.2.10.1506011320000.28433%40sto

wherein Andres opined
>> I think this patch primarily needs:
>> * Benchmarking on FreeBSD/OSX to see whether we should enable the
>>   mmap()/msync(MS_ASYNC) method by default. Unless somebody does so, I'm
>>   inclined to leave it off till then.

but so far as I can tell from the thread, only testing on FreeBSD ever
got done.  So there's no evidence that this was ever beneficial on macOS,
and we now have evidence that it's between counterproductive and
catastrophic depending on which kernel version you look at.

                        regards, tom lane

Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Andres Freund
Date:
On 2017-10-02 15:42:25 -0400, Tom Lane wrote:
> I wrote:
> > In short, therefore, APFS cannot cope with the way we're using msync().
> 
> I experimented with this further by seeing whether the msync() code path
> is of any value on Sierra either.  The answer seems to be "no": cloning
> a scale-1000 pgbench database takes about 17-18 seconds on my Sierra
> laptop using unmodified HEAD, but if I dike out the msync() logic then
> it takes 16-17 seconds.  Both numbers jump around a little, but using
> msync is strictly worse.

Well, that's only measuring one type of workload. Could you run a normal
pgbench with -P1 or so for 2-3 checkpoint cycles and see how big the
latency differences are?


> I propose therefore that an appropriate fix is to unconditionally disable
> the msync code path on Darwin, as we have already done for Windows.  When
> and if Apple changes their kernel so that this path is actually of some
> value, we can figure out how to detect whether to use it.

I'm inclined to think you're right.

This is a surprisingly massive regression for a new OS release - we're
not the only database that uses msync...


Greetings,

Andres Freund



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2017-10-02 15:42:25 -0400, Tom Lane wrote:
>> I experimented with this further by seeing whether the msync() code path
>> is of any value on Sierra either.  The answer seems to be "no": cloning
>> a scale-1000 pgbench database takes about 17-18 seconds on my Sierra
>> laptop using unmodified HEAD, but if I dike out the msync() logic then
>> it takes 16-17 seconds.  Both numbers jump around a little, but using
>> msync is strictly worse.

> Well, that's only measuring one type of workload. Could you run a normal
> pgbench with -P1 or so for 2-3 checkpoint cycles and see how big the
> latency differences are?

Should I expect there to be any difference at all?  We don't enable
*_flush_after by default on non-Linux platforms.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Andres Freund
Date:
On 2017-10-02 15:54:43 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2017-10-02 15:42:25 -0400, Tom Lane wrote:
> >> I experimented with this further by seeing whether the msync() code path
> >> is of any value on Sierra either.  The answer seems to be "no": cloning
> >> a scale-1000 pgbench database takes about 17-18 seconds on my Sierra
> >> laptop using unmodified HEAD, but if I dike out the msync() logic then
> >> it takes 16-17 seconds.  Both numbers jump around a little, but using
> >> msync is strictly worse.
> 
> > Well, that's only measuring one type of workload. Could you run a normal
> > pgbench with -P1 or so for 2-3 checkpoint cycles and see how big the
> > latency differences are?
> 
> Should I expect there to be any difference at all?  We don't enable
> *_flush_after by default on non-Linux platforms.

Right, you'd have to enable that. But your patch would neuter an
intentionally enabled config too, no?

Greetings,

Andres Freund



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2017-10-02 15:54:43 -0400, Tom Lane wrote:
>> Should I expect there to be any difference at all?  We don't enable
>> *_flush_after by default on non-Linux platforms.

> Right, you'd have to enable that. But your patch would neuter an
> intentionally enabled config too, no?

Well, if you want to suggest a specific scenario to try, I'm happy to.
I am not going to guess as to what will satisfy you.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Andres Freund
Date:
On 2017-10-02 15:59:05 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2017-10-02 15:54:43 -0400, Tom Lane wrote:
> >> Should I expect there to be any difference at all?  We don't enable
> >> *_flush_after by default on non-Linux platforms.
> 
> > Right, you'd have to enable that. But your patch would neuter an
> > intentionally enabled config too, no?
> 
> Well, if you want to suggest a specific scenario to try, I'm happy to.
> I am not going to guess as to what will satisfy you.

To demonstrate what I'm observing here, on linux with a fairly fast ssd:

with:
-c autovacuum_analyze_threshold=2147483647 # to avoid analyze snapshot issue
-c fsync=on
-c synchronous_commit=on
-c shared_buffers=4GB
-c max_wal_size=30GB
-c checkpoint_timeout=30s
-c checkpoint_flush_after=0
-c bgwriter_flush_after=0
and
pgbench -i -s 100 -q

a pgbench -M prepared -c 8 -j 8 -n -P1 -T 100
often has periods like:

synchronous_commit = on:
progress: 73.0 s, 395.0 tps, lat 20.029 ms stddev 4.001
progress: 74.0 s, 289.0 tps, lat 23.730 ms stddev 23.337
progress: 75.0 s, 88.0 tps, lat 104.029 ms stddev 178.038
progress: 76.0 s, 400.0 tps, lat 20.055 ms stddev 4.844
latency average = 21.599 ms
latency stddev = 13.865 ms
tps = 370.346506 (including connections establishing)
tps = 370.372550 (excluding connections establishing)

with synchronous_commit=off those periods are a lot worse:
progress: 57.0 s, 21104.3 tps, lat 0.379 ms stddev 0.193
progress: 58.0 s, 9994.1 tps, lat 0.536 ms stddev 3.140
progress: 59.0 s, 0.0 tps, lat -nan ms stddev -nan
progress: 60.0 s, 0.0 tps, lat -nan ms stddev -nan
progress: 61.0 s, 0.0 tps, lat -nan ms stddev -nan
progress: 62.0 s, 0.0 tps, lat -nan ms stddev -nan
progress: 63.0 s, 3319.6 tps, lat 12.860 ms stddev 253.664
progress: 64.0 s, 20997.0 tps, lat 0.381 ms stddev 0.190
progress: 65.0 s, 20409.1 tps, lat 0.392 ms stddev 0.303
...
latency average = 0.745 ms
latency stddev = 20.470 ms
tps = 10743.555553 (including connections establishing)
tps = 10743.815591 (excluding connections establishing)

contrasting that to checkpoint_flush_after=256kB and
bgwriter_flush_after=512kB:

synchronous_commit=on
worst:
progress: 87.0 s, 298.0 tps, lat 26.874 ms stddev 26.691

latency average = 21.898 ms
latency stddev = 6.416 ms
tps = 365.308180 (including connections establishing)
tps = 365.318793 (excluding connections establishing)

synchronous_commit=off

worst:

progress: 30.0 s, 7026.8 tps, lat 1.137 ms stddev 11.070

latency average = 0.550 ms
latency stddev = 5.599 ms
tps = 14547.842213 (including connections establishing)
tps = 14548.325102 (excluding connections establishing)

If you do the same on rotational disks, the stall periods can get a
*lot* worse (multi-minute stalls with pretty much no activity).


What I'm basically wondering is whether we're screwing over somebody who
made the effort to manually configure this on OSX. It's fairly
obvious we need to find a way to disable the msync() path by default.

Greetings,

Andres Freund



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> To demonstrate what I'm observing here, on linux with a fairly fast ssd:
> ...

I tried to replicate this test as closely as I could on the Mac hardware
I have laying about.  I only bothered with the synchronous_commit=off
case, though, since you say that shows the worst effects.  I used the
same parameters you did and the same pgbench settings.  I attach the
pgbench output for six cases, flush_after disabled or enabled on three
different machines:

(A) 2016 MacBook Pro, 2.7GHz i7 + SSD, Sierra, HFS+ file system
(B) 2013 MacBook Pro, 2.3GHz i7 + SSD, High Sierra, APFS file system
(C) 2012 Mac Mini, 2.3GHz i7 + 5400-RPM SATA, High Sierra, HFS+ file system

There is some benefit on the SSD machines, but it's in the range of a
few percent --- clearly, these kernels are not as subject to the basic
I/O-scheduling problem as Linux is.  The spinning-rust machine shows a
nice gain in overall TPS with flush enabled, but it's actually a bit
worse off in terms of the worst-case slowdown --- note that only that
case shows things coming to a complete halt.  It'd be interesting to
check the behavior of a pre-High-Sierra kernel with spinning rust,
but I don't have any remotely modern machine answering that description.

I'm kind of surprised that machine B doesn't show obvious tanking in this
test given that msync() makes it suck so badly at copying a database.
I wonder what is different from the kernel's standpoint ... maybe the
sheer number of different files mmap'd by a single process during the
copy?

> What I'm basically wondering is whether we're screwing somebody over
> that made the effort to manually configure this on OSX. It's fairly
> obvious we need to find a way to disable the msync() by default.

I suspect that anybody who cares about DB performance on macOS will
be running it on SSD-based hardware these days.  The benefit seen on
the Mac Mini would have been worth the trouble of a custom configuration
a few years ago, but I'm dubious that it matters in the real world
anymore.

If we could arrange to not use pg_flush_data in copydir.c on macOS,
I'd be okay with leaving it alone for the configurable flush_after
calls.  But I can't think of a way to do that that wouldn't be a
complete kluge.  I don't much want to do

+#ifndef __darwin__
        pg_flush_data(dstfd, offset, nbytes);
+#endif

but I don't see any better alternative ...

            regards, tom lane

progress: 1.0 s, 8132.9 tps, lat 0.978 ms stddev 0.434
progress: 2.0 s, 8841.0 tps, lat 0.905 ms stddev 0.260
progress: 3.0 s, 9020.1 tps, lat 0.887 ms stddev 0.418
progress: 4.0 s, 9005.9 tps, lat 0.888 ms stddev 0.353
progress: 5.0 s, 9167.1 tps, lat 0.873 ms stddev 0.259
progress: 6.0 s, 9248.1 tps, lat 0.865 ms stddev 0.333
progress: 7.0 s, 9295.0 tps, lat 0.861 ms stddev 0.325
progress: 8.0 s, 9435.0 tps, lat 0.848 ms stddev 0.190
progress: 9.0 s, 9453.0 tps, lat 0.846 ms stddev 0.261
progress: 10.0 s, 9717.1 tps, lat 0.823 ms stddev 0.230
progress: 11.0 s, 9658.8 tps, lat 0.828 ms stddev 0.179
progress: 12.0 s, 9332.8 tps, lat 0.857 ms stddev 0.110
progress: 13.0 s, 9430.2 tps, lat 0.848 ms stddev 0.272
progress: 14.0 s, 9479.0 tps, lat 0.844 ms stddev 0.143
progress: 15.0 s, 9381.0 tps, lat 0.853 ms stddev 0.132
progress: 16.0 s, 9464.8 tps, lat 0.845 ms stddev 0.260
progress: 17.0 s, 9563.3 tps, lat 0.836 ms stddev 0.183
progress: 18.0 s, 9646.0 tps, lat 0.829 ms stddev 0.138
progress: 19.0 s, 9554.0 tps, lat 0.837 ms stddev 0.146
progress: 20.0 s, 9296.0 tps, lat 0.861 ms stddev 0.213
progress: 21.0 s, 9255.9 tps, lat 0.864 ms stddev 0.174
progress: 22.0 s, 9224.1 tps, lat 0.867 ms stddev 0.163
progress: 23.0 s, 9381.2 tps, lat 0.853 ms stddev 0.226
progress: 24.0 s, 9263.9 tps, lat 0.864 ms stddev 0.225
progress: 25.0 s, 9320.8 tps, lat 0.858 ms stddev 0.226
progress: 26.0 s, 9538.8 tps, lat 0.837 ms stddev 0.185
progress: 27.0 s, 9588.3 tps, lat 0.836 ms stddev 0.192
progress: 28.0 s, 9720.0 tps, lat 0.823 ms stddev 0.135
progress: 29.0 s, 9958.0 tps, lat 0.803 ms stddev 0.121
progress: 30.0 s, 9080.1 tps, lat 0.881 ms stddev 0.249
progress: 31.0 s, 8969.8 tps, lat 0.892 ms stddev 0.334
progress: 32.0 s, 9059.9 tps, lat 0.883 ms stddev 0.210
progress: 33.0 s, 9052.2 tps, lat 0.884 ms stddev 0.308
progress: 34.0 s, 8738.9 tps, lat 0.915 ms stddev 0.313
progress: 35.0 s, 8969.0 tps, lat 0.891 ms stddev 0.484
progress: 36.0 s, 8959.8 tps, lat 0.893 ms stddev 0.530
progress: 37.0 s, 8943.3 tps, lat 0.895 ms stddev 0.486
progress: 38.0 s, 8593.7 tps, lat 0.930 ms stddev 0.699
progress: 39.0 s, 8715.1 tps, lat 0.919 ms stddev 0.661
progress: 40.0 s, 8990.3 tps, lat 0.890 ms stddev 0.366
progress: 41.0 s, 8940.8 tps, lat 0.895 ms stddev 0.459
progress: 42.0 s, 9226.0 tps, lat 0.867 ms stddev 0.278
progress: 43.0 s, 9149.2 tps, lat 0.874 ms stddev 0.353
progress: 44.0 s, 9252.0 tps, lat 0.865 ms stddev 0.215
progress: 45.0 s, 7473.9 tps, lat 1.070 ms stddev 2.826
progress: 46.0 s, 9313.9 tps, lat 0.859 ms stddev 0.146
progress: 47.0 s, 9071.2 tps, lat 0.882 ms stddev 0.289
progress: 48.0 s, 9140.9 tps, lat 0.875 ms stddev 0.242
progress: 49.0 s, 9324.0 tps, lat 0.858 ms stddev 0.151
progress: 50.0 s, 9251.8 tps, lat 0.865 ms stddev 0.185
progress: 51.0 s, 9287.3 tps, lat 0.862 ms stddev 0.182
progress: 52.0 s, 9501.9 tps, lat 0.842 ms stddev 0.139
progress: 53.0 s, 9419.1 tps, lat 0.849 ms stddev 0.394
progress: 54.0 s, 9726.0 tps, lat 0.823 ms stddev 0.105
progress: 55.0 s, 9707.9 tps, lat 0.824 ms stddev 0.125
progress: 56.0 s, 9774.0 tps, lat 0.819 ms stddev 0.113
progress: 57.0 s, 9731.1 tps, lat 0.822 ms stddev 0.127
progress: 58.0 s, 9753.1 tps, lat 0.820 ms stddev 0.124
progress: 59.0 s, 9704.7 tps, lat 0.824 ms stddev 0.148
progress: 60.0 s, 9420.1 tps, lat 0.849 ms stddev 0.499
progress: 61.0 s, 9317.1 tps, lat 0.859 ms stddev 0.220
progress: 62.0 s, 9428.9 tps, lat 0.848 ms stddev 0.147
progress: 63.0 s, 9419.8 tps, lat 0.849 ms stddev 0.185
progress: 64.0 s, 9397.1 tps, lat 0.852 ms stddev 0.126
progress: 65.0 s, 8908.1 tps, lat 0.898 ms stddev 0.274
progress: 66.0 s, 8897.0 tps, lat 0.900 ms stddev 0.263
progress: 67.0 s, 8670.1 tps, lat 0.923 ms stddev 0.242
progress: 68.0 s, 8675.8 tps, lat 0.922 ms stddev 0.288
progress: 69.0 s, 8550.1 tps, lat 0.936 ms stddev 0.218
progress: 70.0 s, 8693.0 tps, lat 0.920 ms stddev 0.356
progress: 71.0 s, 8888.0 tps, lat 0.900 ms stddev 0.295
progress: 72.0 s, 9064.8 tps, lat 0.882 ms stddev 0.075
progress: 73.0 s, 9117.1 tps, lat 0.878 ms stddev 0.090
progress: 74.0 s, 9034.0 tps, lat 0.885 ms stddev 0.144
progress: 75.0 s, 7117.1 tps, lat 1.124 ms stddev 1.722
progress: 76.0 s, 8445.9 tps, lat 0.947 ms stddev 0.115
progress: 77.0 s, 8588.9 tps, lat 0.931 ms stddev 0.109
progress: 78.0 s, 8523.1 tps, lat 0.939 ms stddev 0.152
progress: 79.0 s, 8555.8 tps, lat 0.935 ms stddev 0.098
progress: 80.0 s, 8737.3 tps, lat 0.916 ms stddev 0.090
progress: 81.0 s, 9159.9 tps, lat 0.873 ms stddev 0.107
progress: 82.0 s, 9352.1 tps, lat 0.855 ms stddev 0.155
progress: 83.0 s, 9548.0 tps, lat 0.838 ms stddev 0.090
progress: 84.0 s, 9077.8 tps, lat 0.881 ms stddev 0.292
progress: 85.0 s, 9084.2 tps, lat 0.880 ms stddev 0.196
progress: 86.0 s, 9182.1 tps, lat 0.872 ms stddev 0.226
progress: 87.0 s, 9005.8 tps, lat 0.888 ms stddev 0.306
progress: 88.0 s, 9037.9 tps, lat 0.885 ms stddev 0.334
progress: 89.0 s, 9181.2 tps, lat 0.871 ms stddev 0.076
progress: 90.0 s, 8864.8 tps, lat 0.902 ms stddev 0.237
progress: 91.0 s, 8785.2 tps, lat 0.911 ms stddev 0.182
progress: 92.0 s, 9006.9 tps, lat 0.888 ms stddev 0.112
progress: 93.0 s, 9116.0 tps, lat 0.878 ms stddev 0.199
progress: 94.0 s, 9086.8 tps, lat 0.880 ms stddev 0.281
progress: 95.0 s, 8878.5 tps, lat 0.901 ms stddev 0.384
progress: 96.0 s, 8786.6 tps, lat 0.911 ms stddev 0.297
progress: 97.0 s, 8532.0 tps, lat 0.938 ms stddev 0.361
progress: 98.0 s, 8677.9 tps, lat 0.921 ms stddev 0.349
progress: 99.0 s, 8662.1 tps, lat 0.925 ms stddev 0.312
progress: 100.0 s, 8733.1 tps, lat 0.916 ms stddev 0.244
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 911421
latency average = 0.878 ms
latency stddev = 0.402 ms
tps = 9113.930992 (including connections establishing)
tps = 9114.372025 (excluding connections establishing)
progress: 1.0 s, 8764.8 tps, lat 0.908 ms stddev 0.312
progress: 2.0 s, 9752.1 tps, lat 0.820 ms stddev 0.123
progress: 3.0 s, 9901.0 tps, lat 0.808 ms stddev 0.142
progress: 4.0 s, 10136.9 tps, lat 0.789 ms stddev 0.125
progress: 5.0 s, 10233.8 tps, lat 0.782 ms stddev 0.087
progress: 6.0 s, 9964.2 tps, lat 0.803 ms stddev 0.083
progress: 7.0 s, 10362.8 tps, lat 0.772 ms stddev 0.083
progress: 8.0 s, 10367.2 tps, lat 0.772 ms stddev 0.064
progress: 9.0 s, 10316.0 tps, lat 0.775 ms stddev 0.132
progress: 10.0 s, 10294.9 tps, lat 0.777 ms stddev 0.111
progress: 11.0 s, 10312.0 tps, lat 0.776 ms stddev 0.127
progress: 12.0 s, 10328.3 tps, lat 0.775 ms stddev 0.140
progress: 13.0 s, 10369.8 tps, lat 0.772 ms stddev 0.158
progress: 14.0 s, 10261.0 tps, lat 0.780 ms stddev 0.120
progress: 15.0 s, 10142.1 tps, lat 0.789 ms stddev 0.107
progress: 16.0 s, 10161.5 tps, lat 0.787 ms stddev 0.098
progress: 17.0 s, 9943.4 tps, lat 0.805 ms stddev 0.208
progress: 18.0 s, 10614.9 tps, lat 0.754 ms stddev 0.150
progress: 19.0 s, 10694.1 tps, lat 0.748 ms stddev 0.115
progress: 20.0 s, 10729.1 tps, lat 0.746 ms stddev 0.073
progress: 21.0 s, 10403.9 tps, lat 0.769 ms stddev 0.091
progress: 22.0 s, 10605.0 tps, lat 0.754 ms stddev 0.154
progress: 23.0 s, 10759.8 tps, lat 0.743 ms stddev 0.067
progress: 24.0 s, 10732.2 tps, lat 0.745 ms stddev 0.110
progress: 25.0 s, 10218.9 tps, lat 0.783 ms stddev 0.654
progress: 26.0 s, 9830.0 tps, lat 0.814 ms stddev 0.281
progress: 27.0 s, 9819.9 tps, lat 0.815 ms stddev 0.300
progress: 28.0 s, 9893.2 tps, lat 0.809 ms stddev 0.213
progress: 29.0 s, 9873.1 tps, lat 0.810 ms stddev 0.295
progress: 30.0 s, 9766.8 tps, lat 0.819 ms stddev 0.391
progress: 31.0 s, 9821.9 tps, lat 0.814 ms stddev 0.294
progress: 32.0 s, 9791.2 tps, lat 0.817 ms stddev 0.334
progress: 33.0 s, 9863.1 tps, lat 0.811 ms stddev 0.443
progress: 34.0 s, 9875.8 tps, lat 0.810 ms stddev 0.284
progress: 35.0 s, 9917.0 tps, lat 0.807 ms stddev 0.396
progress: 36.0 s, 9884.0 tps, lat 0.809 ms stddev 0.179
progress: 37.0 s, 9880.9 tps, lat 0.810 ms stddev 0.256
progress: 38.0 s, 9879.3 tps, lat 0.810 ms stddev 0.282
progress: 39.0 s, 9617.7 tps, lat 0.832 ms stddev 0.449
progress: 40.0 s, 9619.3 tps, lat 0.832 ms stddev 0.485
progress: 41.0 s, 10134.9 tps, lat 0.789 ms stddev 0.137
progress: 42.0 s, 10137.0 tps, lat 0.789 ms stddev 0.129
progress: 43.0 s, 10234.2 tps, lat 0.782 ms stddev 0.165
progress: 44.0 s, 10267.8 tps, lat 0.779 ms stddev 0.086
progress: 45.0 s, 10123.0 tps, lat 0.790 ms stddev 0.107
progress: 46.0 s, 10209.8 tps, lat 0.784 ms stddev 0.188
progress: 47.0 s, 9951.2 tps, lat 0.804 ms stddev 0.195
progress: 48.0 s, 10341.0 tps, lat 0.774 ms stddev 0.130
progress: 49.0 s, 10422.1 tps, lat 0.768 ms stddev 0.107
progress: 50.0 s, 10456.0 tps, lat 0.765 ms stddev 0.056
progress: 51.0 s, 10382.0 tps, lat 0.770 ms stddev 0.084
progress: 52.0 s, 10133.9 tps, lat 0.789 ms stddev 0.113
progress: 53.0 s, 9836.0 tps, lat 0.813 ms stddev 0.227
progress: 54.0 s, 10211.8 tps, lat 0.783 ms stddev 0.198
progress: 55.0 s, 9668.2 tps, lat 0.827 ms stddev 0.258
progress: 56.0 s, 9533.1 tps, lat 0.839 ms stddev 0.272
progress: 57.0 s, 9641.6 tps, lat 0.830 ms stddev 0.308
progress: 58.0 s, 9626.3 tps, lat 0.831 ms stddev 0.223
progress: 59.0 s, 9647.8 tps, lat 0.829 ms stddev 0.247
progress: 60.0 s, 9089.1 tps, lat 0.880 ms stddev 0.459
progress: 61.0 s, 9240.2 tps, lat 0.866 ms stddev 0.399
progress: 62.0 s, 9431.9 tps, lat 0.848 ms stddev 0.358
progress: 63.0 s, 9469.9 tps, lat 0.845 ms stddev 0.385
progress: 64.0 s, 9654.1 tps, lat 0.829 ms stddev 0.368
progress: 65.0 s, 9561.5 tps, lat 0.837 ms stddev 0.382
progress: 66.0 s, 9705.3 tps, lat 0.824 ms stddev 0.376
progress: 67.0 s, 9651.3 tps, lat 0.829 ms stddev 0.380
progress: 68.0 s, 9651.8 tps, lat 0.829 ms stddev 0.263
progress: 69.0 s, 9277.8 tps, lat 0.862 ms stddev 0.357
progress: 70.0 s, 9232.1 tps, lat 0.867 ms stddev 0.301
progress: 71.0 s, 9440.0 tps, lat 0.848 ms stddev 0.167
progress: 72.0 s, 9059.1 tps, lat 0.883 ms stddev 0.269
progress: 73.0 s, 8913.8 tps, lat 0.897 ms stddev 0.370
progress: 74.0 s, 8727.8 tps, lat 0.917 ms stddev 0.404
progress: 75.0 s, 9374.4 tps, lat 0.853 ms stddev 0.166
progress: 76.0 s, 9082.9 tps, lat 0.881 ms stddev 0.191
progress: 77.0 s, 9177.8 tps, lat 0.872 ms stddev 0.180
progress: 78.0 s, 8891.2 tps, lat 0.900 ms stddev 0.774
progress: 79.0 s, 9265.7 tps, lat 0.863 ms stddev 0.128
progress: 80.0 s, 9117.1 tps, lat 0.877 ms stddev 0.099
progress: 81.0 s, 9080.0 tps, lat 0.881 ms stddev 0.114
progress: 82.0 s, 9144.3 tps, lat 0.875 ms stddev 0.182
progress: 83.0 s, 9207.0 tps, lat 0.869 ms stddev 0.109
progress: 84.0 s, 9158.8 tps, lat 0.874 ms stddev 0.132
progress: 85.0 s, 8498.1 tps, lat 0.941 ms stddev 0.584
progress: 86.0 s, 8454.0 tps, lat 0.946 ms stddev 0.408
progress: 87.0 s, 7820.1 tps, lat 1.023 ms stddev 0.612
progress: 88.0 s, 7918.7 tps, lat 1.010 ms stddev 0.550
progress: 89.0 s, 7748.0 tps, lat 1.033 ms stddev 0.692
progress: 90.0 s, 7650.1 tps, lat 1.046 ms stddev 0.844
progress: 91.0 s, 8200.0 tps, lat 0.975 ms stddev 0.541
progress: 92.0 s, 8365.2 tps, lat 0.956 ms stddev 0.409
progress: 93.0 s, 8501.0 tps, lat 0.941 ms stddev 0.297
progress: 94.0 s, 8652.9 tps, lat 0.925 ms stddev 0.382
progress: 95.0 s, 8760.1 tps, lat 0.913 ms stddev 0.569
progress: 96.0 s, 8760.9 tps, lat 0.913 ms stddev 0.445
progress: 97.0 s, 8660.0 tps, lat 0.923 ms stddev 0.239
progress: 98.0 s, 8374.8 tps, lat 0.955 ms stddev 0.340
progress: 99.0 s, 8342.3 tps, lat 0.959 ms stddev 0.411
progress: 100.0 s, 8513.8 tps, lat 0.940 ms stddev 0.265
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 959494
latency average = 0.834 ms
latency stddev = 0.312 ms
tps = 9594.823200 (including connections establishing)
tps = 9595.312663 (excluding connections establishing)
progress: 1.0 s, 7539.0 tps, lat 1.057 ms stddev 0.281
progress: 2.0 s, 7877.5 tps, lat 1.016 ms stddev 0.315
progress: 3.0 s, 7964.3 tps, lat 1.004 ms stddev 0.162
progress: 4.0 s, 7976.2 tps, lat 1.003 ms stddev 0.210
progress: 5.0 s, 8004.8 tps, lat 1.000 ms stddev 0.213
progress: 6.0 s, 7509.0 tps, lat 1.065 ms stddev 0.347
progress: 7.0 s, 7745.9 tps, lat 1.033 ms stddev 0.214
progress: 8.0 s, 7696.4 tps, lat 1.039 ms stddev 0.426
progress: 9.0 s, 7502.7 tps, lat 1.067 ms stddev 0.454
progress: 10.0 s, 7541.8 tps, lat 1.061 ms stddev 0.681
progress: 11.0 s, 7570.0 tps, lat 1.057 ms stddev 0.379
progress: 12.0 s, 7610.8 tps, lat 1.051 ms stddev 0.369
progress: 13.0 s, 7902.3 tps, lat 1.013 ms stddev 0.129
progress: 14.0 s, 7950.9 tps, lat 1.006 ms stddev 0.195
progress: 15.0 s, 7998.0 tps, lat 1.000 ms stddev 0.183
progress: 16.0 s, 8039.3 tps, lat 0.995 ms stddev 0.226
progress: 17.0 s, 8119.9 tps, lat 0.985 ms stddev 0.149
progress: 18.0 s, 8126.9 tps, lat 0.984 ms stddev 0.126
progress: 19.0 s, 8122.1 tps, lat 0.985 ms stddev 0.186
progress: 20.0 s, 7592.1 tps, lat 1.053 ms stddev 0.379
progress: 21.0 s, 6043.1 tps, lat 1.325 ms stddev 6.291
progress: 22.0 s, 8248.7 tps, lat 0.970 ms stddev 0.141
progress: 23.0 s, 8286.2 tps, lat 0.965 ms stddev 0.077
progress: 24.0 s, 8336.1 tps, lat 0.960 ms stddev 0.099
progress: 25.0 s, 8328.7 tps, lat 0.960 ms stddev 0.073
progress: 26.0 s, 8356.2 tps, lat 0.958 ms stddev 0.073
progress: 27.0 s, 8380.9 tps, lat 0.954 ms stddev 0.096
progress: 28.0 s, 8382.2 tps, lat 0.954 ms stddev 0.094
progress: 29.0 s, 8469.0 tps, lat 0.944 ms stddev 0.109
progress: 30.0 s, 8506.6 tps, lat 0.940 ms stddev 0.070
progress: 31.0 s, 8544.1 tps, lat 0.936 ms stddev 0.080
progress: 32.0 s, 8529.4 tps, lat 0.938 ms stddev 0.086
progress: 33.0 s, 8545.6 tps, lat 0.936 ms stddev 0.084
progress: 34.0 s, 8508.0 tps, lat 0.940 ms stddev 0.099
progress: 35.0 s, 8463.1 tps, lat 0.945 ms stddev 0.138
progress: 36.0 s, 7576.1 tps, lat 1.056 ms stddev 0.343
progress: 37.0 s, 8093.0 tps, lat 0.988 ms stddev 0.189
progress: 38.0 s, 8098.7 tps, lat 0.988 ms stddev 0.334
progress: 39.0 s, 8137.3 tps, lat 0.983 ms stddev 0.215
progress: 40.0 s, 7937.9 tps, lat 1.008 ms stddev 0.184
progress: 41.0 s, 7804.0 tps, lat 1.025 ms stddev 0.407
progress: 42.0 s, 7798.9 tps, lat 1.026 ms stddev 0.173
progress: 43.0 s, 7902.1 tps, lat 1.012 ms stddev 0.205
progress: 44.0 s, 7869.6 tps, lat 1.016 ms stddev 0.432
progress: 45.0 s, 7924.2 tps, lat 1.010 ms stddev 0.130
progress: 46.0 s, 8165.9 tps, lat 0.980 ms stddev 0.141
progress: 47.0 s, 8052.1 tps, lat 0.994 ms stddev 0.351
progress: 48.0 s, 7926.1 tps, lat 1.009 ms stddev 0.207
progress: 49.0 s, 7976.0 tps, lat 1.003 ms stddev 0.190
progress: 50.0 s, 7550.6 tps, lat 1.059 ms stddev 0.555
progress: 51.0 s, 6498.3 tps, lat 1.232 ms stddev 3.693
progress: 52.0 s, 8148.9 tps, lat 0.981 ms stddev 0.112
progress: 53.0 s, 8092.3 tps, lat 0.989 ms stddev 0.072
progress: 54.0 s, 8082.5 tps, lat 0.990 ms stddev 0.109
progress: 55.0 s, 8467.6 tps, lat 0.945 ms stddev 0.659
progress: 56.0 s, 8948.7 tps, lat 0.894 ms stddev 0.074
progress: 57.0 s, 8877.0 tps, lat 0.901 ms stddev 0.087
progress: 58.0 s, 8826.2 tps, lat 0.906 ms stddev 0.064
progress: 59.0 s, 8820.8 tps, lat 0.907 ms stddev 0.078
progress: 60.0 s, 8823.9 tps, lat 0.907 ms stddev 0.077
progress: 61.0 s, 8841.2 tps, lat 0.905 ms stddev 0.112
progress: 62.0 s, 8840.9 tps, lat 0.905 ms stddev 0.103
progress: 63.0 s, 8857.1 tps, lat 0.903 ms stddev 0.096
progress: 64.0 s, 8877.8 tps, lat 0.901 ms stddev 0.079
progress: 65.0 s, 8892.8 tps, lat 0.900 ms stddev 0.060
progress: 66.0 s, 8039.3 tps, lat 0.995 ms stddev 0.283
progress: 67.0 s, 8290.1 tps, lat 0.965 ms stddev 0.121
progress: 68.0 s, 8110.0 tps, lat 0.986 ms stddev 0.430
progress: 69.0 s, 8346.1 tps, lat 0.959 ms stddev 0.336
progress: 70.0 s, 8258.7 tps, lat 0.969 ms stddev 0.311
progress: 71.0 s, 8120.2 tps, lat 0.985 ms stddev 0.445
progress: 72.0 s, 7867.1 tps, lat 1.017 ms stddev 0.641
progress: 73.0 s, 7781.7 tps, lat 1.028 ms stddev 0.313
progress: 74.0 s, 8221.8 tps, lat 0.973 ms stddev 0.211
progress: 75.0 s, 7901.2 tps, lat 1.012 ms stddev 0.418
progress: 76.0 s, 8161.1 tps, lat 0.980 ms stddev 0.377
progress: 77.0 s, 8209.7 tps, lat 0.975 ms stddev 0.176
progress: 78.0 s, 8280.4 tps, lat 0.966 ms stddev 0.149
progress: 79.0 s, 8472.7 tps, lat 0.944 ms stddev 0.319
progress: 80.0 s, 7980.1 tps, lat 1.002 ms stddev 0.793
progress: 81.0 s, 6764.8 tps, lat 1.183 ms stddev 1.518
progress: 82.0 s, 8119.1 tps, lat 0.985 ms stddev 0.497
progress: 83.0 s, 8659.1 tps, lat 0.924 ms stddev 0.091
progress: 84.0 s, 8778.0 tps, lat 0.911 ms stddev 0.113
progress: 85.0 s, 8831.2 tps, lat 0.906 ms stddev 0.075
progress: 86.0 s, 8832.5 tps, lat 0.906 ms stddev 0.068
progress: 87.0 s, 8657.4 tps, lat 0.924 ms stddev 0.087
progress: 88.0 s, 8601.8 tps, lat 0.930 ms stddev 0.130
progress: 89.0 s, 8853.8 tps, lat 0.903 ms stddev 0.078
progress: 90.0 s, 8869.3 tps, lat 0.902 ms stddev 0.079
progress: 91.0 s, 8868.0 tps, lat 0.902 ms stddev 0.096
progress: 92.0 s, 8716.8 tps, lat 0.918 ms stddev 0.090
progress: 93.0 s, 8644.3 tps, lat 0.925 ms stddev 0.112
progress: 94.0 s, 8915.9 tps, lat 0.898 ms stddev 0.063
progress: 95.0 s, 8915.1 tps, lat 0.897 ms stddev 0.114
progress: 96.0 s, 8224.1 tps, lat 0.972 ms stddev 0.371
progress: 97.0 s, 8292.9 tps, lat 0.965 ms stddev 0.413
progress: 98.0 s, 8526.8 tps, lat 0.938 ms stddev 0.143
progress: 99.0 s, 8520.0 tps, lat 0.939 ms stddev 0.156
progress: 100.0 s, 8560.0 tps, lat 0.934 ms stddev 0.155
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 820226
latency average = 0.975 ms
latency stddev = 0.701 ms
tps = 8202.177302 (including connections establishing)
tps = 8202.483978 (excluding connections establishing)
progress: 1.0 s, 8351.6 tps, lat 0.954 ms stddev 0.195
progress: 2.0 s, 8570.1 tps, lat 0.933 ms stddev 0.223
progress: 3.0 s, 8670.3 tps, lat 0.923 ms stddev 0.134
progress: 4.0 s, 8720.0 tps, lat 0.918 ms stddev 0.108
progress: 5.0 s, 8761.7 tps, lat 0.913 ms stddev 0.095
progress: 6.0 s, 7857.8 tps, lat 1.017 ms stddev 0.981
progress: 7.0 s, 8663.5 tps, lat 0.924 ms stddev 0.169
progress: 8.0 s, 8834.8 tps, lat 0.906 ms stddev 0.067
progress: 9.0 s, 8833.8 tps, lat 0.905 ms stddev 0.069
progress: 10.0 s, 8878.2 tps, lat 0.901 ms stddev 0.083
progress: 11.0 s, 8879.1 tps, lat 0.901 ms stddev 0.124
progress: 12.0 s, 8888.1 tps, lat 0.900 ms stddev 0.092
progress: 13.0 s, 8921.9 tps, lat 0.897 ms stddev 0.081
progress: 14.0 s, 8977.6 tps, lat 0.891 ms stddev 0.071
progress: 15.0 s, 8971.6 tps, lat 0.892 ms stddev 0.067
progress: 16.0 s, 8978.8 tps, lat 0.891 ms stddev 0.064
progress: 17.0 s, 8984.9 tps, lat 0.890 ms stddev 0.067
progress: 18.0 s, 9027.9 tps, lat 0.886 ms stddev 0.104
progress: 19.0 s, 9036.8 tps, lat 0.885 ms stddev 0.098
progress: 20.0 s, 9043.2 tps, lat 0.885 ms stddev 0.105
progress: 21.0 s, 9064.7 tps, lat 0.883 ms stddev 0.081
progress: 22.0 s, 8654.6 tps, lat 0.924 ms stddev 0.152
progress: 23.0 s, 8717.7 tps, lat 0.918 ms stddev 0.189
progress: 24.0 s, 8736.0 tps, lat 0.916 ms stddev 0.150
progress: 25.0 s, 8758.0 tps, lat 0.913 ms stddev 0.157
progress: 26.0 s, 7967.7 tps, lat 1.004 ms stddev 0.430
progress: 27.0 s, 7766.2 tps, lat 1.030 ms stddev 0.923
progress: 28.0 s, 8737.1 tps, lat 0.916 ms stddev 0.173
progress: 29.0 s, 8812.1 tps, lat 0.908 ms stddev 0.130
progress: 30.0 s, 8736.9 tps, lat 0.916 ms stddev 0.168
progress: 31.0 s, 8502.0 tps, lat 0.941 ms stddev 0.386
progress: 32.0 s, 8541.1 tps, lat 0.937 ms stddev 0.289
progress: 33.0 s, 8853.1 tps, lat 0.904 ms stddev 0.106
progress: 34.0 s, 8888.9 tps, lat 0.900 ms stddev 0.100
progress: 35.0 s, 8870.0 tps, lat 0.901 ms stddev 0.126
progress: 36.0 s, 7873.9 tps, lat 1.016 ms stddev 1.000
progress: 37.0 s, 7563.2 tps, lat 1.058 ms stddev 0.792
progress: 38.0 s, 9017.9 tps, lat 0.887 ms stddev 0.074
progress: 39.0 s, 9017.3 tps, lat 0.887 ms stddev 0.075
progress: 40.0 s, 9006.0 tps, lat 0.888 ms stddev 0.096
progress: 41.0 s, 9010.9 tps, lat 0.888 ms stddev 0.103
progress: 42.0 s, 8990.0 tps, lat 0.890 ms stddev 0.080
progress: 43.0 s, 8953.1 tps, lat 0.894 ms stddev 0.078
progress: 44.0 s, 8904.9 tps, lat 0.898 ms stddev 0.068
progress: 45.0 s, 8908.0 tps, lat 0.898 ms stddev 0.063
progress: 46.0 s, 8915.9 tps, lat 0.897 ms stddev 0.094
progress: 47.0 s, 8840.0 tps, lat 0.905 ms stddev 0.098
progress: 48.0 s, 8871.9 tps, lat 0.902 ms stddev 0.076
progress: 49.0 s, 8850.8 tps, lat 0.904 ms stddev 0.106
progress: 50.0 s, 8851.1 tps, lat 0.904 ms stddev 0.097
progress: 51.0 s, 8712.9 tps, lat 0.918 ms stddev 0.112
progress: 52.0 s, 7982.3 tps, lat 1.002 ms stddev 0.526
progress: 53.0 s, 8386.1 tps, lat 0.954 ms stddev 0.171
progress: 54.0 s, 8406.8 tps, lat 0.952 ms stddev 0.172
progress: 55.0 s, 8433.8 tps, lat 0.949 ms stddev 0.180
progress: 56.0 s, 8401.4 tps, lat 0.951 ms stddev 0.246
progress: 57.0 s, 8167.9 tps, lat 0.981 ms stddev 0.165
progress: 58.0 s, 8078.1 tps, lat 0.987 ms stddev 0.410
progress: 59.0 s, 8080.9 tps, lat 0.993 ms stddev 0.522
progress: 60.0 s, 8157.1 tps, lat 0.981 ms stddev 0.125
progress: 61.0 s, 8177.9 tps, lat 0.978 ms stddev 0.342
progress: 62.0 s, 8024.7 tps, lat 0.997 ms stddev 0.124
progress: 63.0 s, 8283.3 tps, lat 0.966 ms stddev 0.116
progress: 64.0 s, 8412.9 tps, lat 0.951 ms stddev 0.152
progress: 65.0 s, 8555.1 tps, lat 0.935 ms stddev 0.108
progress: 66.0 s, 7775.2 tps, lat 1.029 ms stddev 0.803
progress: 67.0 s, 7267.8 tps, lat 1.100 ms stddev 0.998
progress: 68.0 s, 8364.0 tps, lat 0.957 ms stddev 0.246
progress: 69.0 s, 8484.2 tps, lat 0.943 ms stddev 0.097
progress: 70.0 s, 8689.3 tps, lat 0.920 ms stddev 0.101
progress: 71.0 s, 8747.7 tps, lat 0.915 ms stddev 0.071
progress: 72.0 s, 8636.0 tps, lat 0.926 ms stddev 0.092
progress: 73.0 s, 8541.0 tps, lat 0.937 ms stddev 0.093
progress: 74.0 s, 8551.8 tps, lat 0.935 ms stddev 0.084
progress: 75.0 s, 8651.2 tps, lat 0.925 ms stddev 0.089
progress: 76.0 s, 8840.1 tps, lat 0.905 ms stddev 0.063
progress: 77.0 s, 8697.5 tps, lat 0.920 ms stddev 0.094
progress: 78.0 s, 8614.4 tps, lat 0.929 ms stddev 0.062
progress: 79.0 s, 8597.7 tps, lat 0.930 ms stddev 0.087
progress: 80.0 s, 8860.3 tps, lat 0.903 ms stddev 0.074
progress: 81.0 s, 8876.8 tps, lat 0.901 ms stddev 0.071
progress: 82.0 s, 8158.2 tps, lat 0.981 ms stddev 0.196
progress: 83.0 s, 8207.9 tps, lat 0.974 ms stddev 0.151
progress: 84.0 s, 8354.0 tps, lat 0.958 ms stddev 0.124
progress: 85.0 s, 8475.9 tps, lat 0.944 ms stddev 0.159
progress: 86.0 s, 8280.1 tps, lat 0.966 ms stddev 0.213
progress: 87.0 s, 7989.2 tps, lat 1.001 ms stddev 0.277
progress: 88.0 s, 8028.8 tps, lat 0.996 ms stddev 0.501
progress: 89.0 s, 8190.9 tps, lat 0.977 ms stddev 0.562
progress: 90.0 s, 7874.3 tps, lat 1.016 ms stddev 0.438
progress: 91.0 s, 8056.8 tps, lat 0.993 ms stddev 0.385
progress: 92.0 s, 7904.1 tps, lat 1.012 ms stddev 0.273
progress: 93.0 s, 8369.9 tps, lat 0.956 ms stddev 0.178
progress: 94.0 s, 8624.1 tps, lat 0.928 ms stddev 0.104
progress: 95.0 s, 8636.8 tps, lat 0.926 ms stddev 0.123
progress: 96.0 s, 7606.1 tps, lat 1.050 ms stddev 0.918
progress: 97.0 s, 7754.1 tps, lat 1.034 ms stddev 0.716
progress: 98.0 s, 8770.9 tps, lat 0.912 ms stddev 0.084
progress: 99.0 s, 8495.0 tps, lat 0.942 ms stddev 0.094
progress: 100.0 s, 8748.0 tps, lat 0.915 ms stddev 0.078
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 854022
latency average = 0.937 ms
latency stddev = 0.307 ms
tps = 8540.098382 (including connections establishing)
tps = 8540.430623 (excluding connections establishing)
progress: 1.0 s, 354.3 tps, lat 21.766 ms stddev 29.726
progress: 2.0 s, 392.8 tps, lat 20.598 ms stddev 29.885
progress: 3.0 s, 397.5 tps, lat 19.979 ms stddev 28.740
progress: 4.0 s, 358.3 tps, lat 22.364 ms stddev 36.618
progress: 5.0 s, 385.1 tps, lat 20.769 ms stddev 29.758
progress: 6.0 s, 437.7 tps, lat 18.215 ms stddev 27.911
progress: 7.0 s, 411.3 tps, lat 18.730 ms stddev 29.058
progress: 8.0 s, 323.2 tps, lat 24.821 ms stddev 46.625
progress: 9.0 s, 242.1 tps, lat 31.820 ms stddev 49.603
progress: 10.0 s, 232.1 tps, lat 37.136 ms stddev 61.432
progress: 11.0 s, 368.1 tps, lat 21.656 ms stddev 34.450
progress: 12.0 s, 423.9 tps, lat 18.847 ms stddev 30.608
progress: 13.0 s, 302.5 tps, lat 24.763 ms stddev 41.273
progress: 14.0 s, 375.0 tps, lat 22.814 ms stddev 38.451
progress: 15.0 s, 363.7 tps, lat 22.077 ms stddev 35.401
progress: 16.0 s, 348.3 tps, lat 22.275 ms stddev 39.074
progress: 17.0 s, 335.3 tps, lat 23.928 ms stddev 35.553
progress: 18.0 s, 341.9 tps, lat 23.947 ms stddev 39.833
progress: 19.0 s, 306.6 tps, lat 25.439 ms stddev 40.398
progress: 20.0 s, 342.4 tps, lat 23.327 ms stddev 37.046
progress: 21.0 s, 319.6 tps, lat 25.505 ms stddev 37.317
progress: 22.0 s, 393.5 tps, lat 20.337 ms stddev 30.012
progress: 23.0 s, 322.0 tps, lat 25.075 ms stddev 36.730
progress: 24.0 s, 300.7 tps, lat 25.030 ms stddev 45.739
progress: 25.0 s, 347.6 tps, lat 24.351 ms stddev 35.704
progress: 26.0 s, 285.5 tps, lat 27.072 ms stddev 39.857
progress: 27.0 s, 347.3 tps, lat 23.644 ms stddev 33.821
progress: 28.0 s, 290.8 tps, lat 27.130 ms stddev 37.158
progress: 29.0 s, 311.1 tps, lat 24.045 ms stddev 31.689
progress: 30.0 s, 233.2 tps, lat 33.000 ms stddev 45.583
progress: 31.0 s, 170.9 tps, lat 28.621 ms stddev 33.804
progress: 32.0 s, 230.1 tps, lat 52.175 ms stddev 178.444
progress: 33.0 s, 261.0 tps, lat 30.708 ms stddev 35.658
progress: 34.0 s, 285.6 tps, lat 27.915 ms stddev 35.451
progress: 35.0 s, 257.1 tps, lat 31.419 ms stddev 36.798
progress: 36.0 s, 270.9 tps, lat 27.359 ms stddev 31.285
progress: 37.0 s, 247.2 tps, lat 34.842 ms stddev 37.615
progress: 38.0 s, 252.0 tps, lat 31.653 ms stddev 33.157
progress: 39.0 s, 226.9 tps, lat 35.143 ms stddev 38.065
progress: 40.0 s, 239.8 tps, lat 33.571 ms stddev 35.331
progress: 41.0 s, 252.8 tps, lat 31.851 ms stddev 37.371
progress: 42.0 s, 231.7 tps, lat 32.731 ms stddev 36.321
progress: 43.0 s, 246.8 tps, lat 33.477 ms stddev 33.611
progress: 44.0 s, 220.4 tps, lat 35.395 ms stddev 39.597
progress: 45.0 s, 246.1 tps, lat 33.283 ms stddev 34.154
progress: 46.0 s, 220.3 tps, lat 36.708 ms stddev 47.081
progress: 47.0 s, 237.3 tps, lat 33.601 ms stddev 39.181
progress: 48.0 s, 262.3 tps, lat 30.803 ms stddev 34.408
progress: 49.0 s, 245.6 tps, lat 32.056 ms stddev 39.298
progress: 50.0 s, 262.0 tps, lat 30.839 ms stddev 42.311
progress: 51.0 s, 265.0 tps, lat 29.799 ms stddev 35.100
progress: 52.0 s, 268.7 tps, lat 30.187 ms stddev 35.988
progress: 53.0 s, 293.8 tps, lat 27.069 ms stddev 33.729
progress: 54.0 s, 293.2 tps, lat 27.442 ms stddev 33.639
progress: 55.0 s, 290.1 tps, lat 26.964 ms stddev 32.010
progress: 56.0 s, 268.0 tps, lat 29.409 ms stddev 33.619
progress: 57.0 s, 284.4 tps, lat 29.508 ms stddev 36.257
progress: 58.0 s, 272.7 tps, lat 28.852 ms stddev 34.548
progress: 59.0 s, 270.8 tps, lat 29.433 ms stddev 34.788
progress: 60.0 s, 255.2 tps, lat 31.209 ms stddev 36.783
progress: 61.0 s, 267.5 tps, lat 29.800 ms stddev 36.414
progress: 62.0 s, 235.7 tps, lat 32.803 ms stddev 38.401
progress: 63.0 s, 216.8 tps, lat 38.299 ms stddev 47.885
progress: 64.0 s, 193.2 tps, lat 41.626 ms stddev 48.702
progress: 65.0 s, 231.6 tps, lat 34.831 ms stddev 38.802
progress: 66.0 s, 286.5 tps, lat 27.057 ms stddev 35.402
progress: 67.0 s, 103.9 tps, lat 76.576 ms stddev 109.537
progress: 68.0 s, 75.9 tps, lat 59.812 ms stddev 71.970
progress: 69.0 s, 41.0 tps, lat 267.068 ms stddev 497.115
progress: 70.0 s, 80.1 tps, lat 45.052 ms stddev 63.493
progress: 71.0 s, 9.0 tps, lat 463.164 ms stddev 87.456
progress: 72.0 s, 3.0 tps, lat 308.381 ms stddev 0.038
progress: 73.0 s, 6.0 tps, lat 785.111 ms stddev 8.333
progress: 74.0 s, 17.0 tps, lat 185.567 ms stddev 273.242
progress: 75.0 s, 87.4 tps, lat 367.291 ms stddev 1218.343
progress: 76.0 s, 184.0 tps, lat 40.505 ms stddev 52.605
progress: 77.0 s, 214.2 tps, lat 40.240 ms stddev 57.039
progress: 78.0 s, 119.8 tps, lat 61.882 ms stddev 83.831
progress: 79.0 s, 123.5 tps, lat 70.183 ms stddev 86.087
progress: 80.0 s, 79.7 tps, lat 94.124 ms stddev 95.042
progress: 81.0 s, 143.7 tps, lat 59.278 ms stddev 72.276
progress: 82.0 s, 188.1 tps, lat 43.177 ms stddev 56.089
progress: 83.0 s, 185.9 tps, lat 40.520 ms stddev 47.181
progress: 84.0 s, 266.0 tps, lat 31.613 ms stddev 44.067
progress: 85.0 s, 221.0 tps, lat 36.609 ms stddev 43.833
progress: 86.0 s, 261.7 tps, lat 30.018 ms stddev 39.138
progress: 87.0 s, 203.8 tps, lat 38.891 ms stddev 48.115
progress: 88.0 s, 234.5 tps, lat 32.442 ms stddev 39.910
progress: 89.0 s, 243.8 tps, lat 33.841 ms stddev 43.618
progress: 90.0 s, 269.0 tps, lat 30.627 ms stddev 42.103
progress: 91.0 s, 231.2 tps, lat 34.578 ms stddev 46.400
progress: 92.0 s, 253.8 tps, lat 31.450 ms stddev 39.050
progress: 93.0 s, 253.7 tps, lat 31.585 ms stddev 44.355
progress: 94.0 s, 207.3 tps, lat 37.591 ms stddev 48.821
progress: 95.0 s, 270.0 tps, lat 30.512 ms stddev 45.060
progress: 96.0 s, 228.7 tps, lat 32.807 ms stddev 47.779
progress: 97.0 s, 282.5 tps, lat 29.513 ms stddev 40.532
progress: 98.0 s, 231.7 tps, lat 34.281 ms stddev 42.550
progress: 99.0 s, 230.3 tps, lat 34.990 ms stddev 42.964
progress: 100.0 s, 333.4 tps, lat 24.152 ms stddev 42.938
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 25148
latency average = 31.830 ms
latency stddev = 91.203 ms
tps = 251.245585 (including connections establishing)
tps = 251.258855 (excluding connections establishing)
progress: 1.0 s, 352.0 tps, lat 21.762 ms stddev 32.356
progress: 2.0 s, 371.0 tps, lat 21.641 ms stddev 33.530
progress: 3.0 s, 310.5 tps, lat 25.906 ms stddev 39.120
progress: 4.0 s, 333.8 tps, lat 23.855 ms stddev 42.609
progress: 5.0 s, 327.6 tps, lat 24.323 ms stddev 39.091
progress: 6.0 s, 373.2 tps, lat 21.448 ms stddev 36.218
progress: 7.0 s, 385.0 tps, lat 20.678 ms stddev 33.351
progress: 8.0 s, 448.1 tps, lat 18.270 ms stddev 32.740
progress: 9.0 s, 401.1 tps, lat 19.470 ms stddev 31.664
progress: 10.0 s, 380.9 tps, lat 21.224 ms stddev 34.309
progress: 11.0 s, 381.4 tps, lat 20.806 ms stddev 34.993
progress: 12.0 s, 390.7 tps, lat 20.764 ms stddev 35.934
progress: 13.0 s, 356.6 tps, lat 21.393 ms stddev 30.197
progress: 14.0 s, 240.8 tps, lat 32.856 ms stddev 52.230
progress: 15.0 s, 252.0 tps, lat 32.258 ms stddev 49.030
progress: 16.0 s, 233.7 tps, lat 35.707 ms stddev 50.516
progress: 17.0 s, 232.3 tps, lat 32.167 ms stddev 48.061
progress: 18.0 s, 303.8 tps, lat 27.647 ms stddev 45.626
progress: 19.0 s, 246.1 tps, lat 31.823 ms stddev 52.748
progress: 20.0 s, 162.0 tps, lat 50.026 ms stddev 106.085
progress: 21.0 s, 358.2 tps, lat 21.578 ms stddev 34.363
progress: 22.0 s, 344.0 tps, lat 24.137 ms stddev 40.426
progress: 23.0 s, 192.0 tps, lat 34.469 ms stddev 54.521
progress: 24.0 s, 75.0 tps, lat 24.232 ms stddev 49.533
progress: 25.0 s, 62.0 tps, lat 139.837 ms stddev 395.765
progress: 26.0 s, 36.1 tps, lat 43.667 ms stddev 62.057
progress: 27.0 s, 0.0 tps, lat nan ms stddev nan
progress: 28.0 s, 0.0 tps, lat nan ms stddev nan
progress: 29.0 s, 0.0 tps, lat nan ms stddev nan
progress: 30.0 s, 140.7 tps, lat 310.746 ms stddev 1197.762
progress: 31.0 s, 300.4 tps, lat 30.994 ms stddev 62.987
progress: 32.0 s, 160.2 tps, lat 51.990 ms stddev 101.348
progress: 33.0 s, 189.0 tps, lat 41.455 ms stddev 74.822
progress: 34.0 s, 271.1 tps, lat 28.419 ms stddev 49.722
progress: 35.0 s, 206.2 tps, lat 40.413 ms stddev 63.814
progress: 36.0 s, 247.6 tps, lat 29.613 ms stddev 52.700
progress: 37.0 s, 294.0 tps, lat 28.875 ms stddev 50.316
progress: 38.0 s, 295.0 tps, lat 25.199 ms stddev 48.377
progress: 39.0 s, 339.0 tps, lat 25.873 ms stddev 50.000
progress: 40.0 s, 255.2 tps, lat 29.400 ms stddev 57.135
progress: 41.0 s, 360.8 tps, lat 23.759 ms stddev 48.464
progress: 42.0 s, 350.2 tps, lat 20.988 ms stddev 38.153
progress: 43.0 s, 382.9 tps, lat 22.607 ms stddev 43.680
progress: 44.0 s, 395.0 tps, lat 19.781 ms stddev 36.094
progress: 45.0 s, 491.4 tps, lat 16.590 ms stddev 33.418
progress: 46.0 s, 342.0 tps, lat 23.228 ms stddev 42.476
progress: 47.0 s, 341.3 tps, lat 22.412 ms stddev 41.914
progress: 48.0 s, 393.0 tps, lat 21.357 ms stddev 39.458
progress: 49.0 s, 420.9 tps, lat 18.698 ms stddev 39.495
progress: 50.0 s, 319.6 tps, lat 25.394 ms stddev 52.347
progress: 51.0 s, 486.0 tps, lat 16.550 ms stddev 33.217
progress: 52.0 s, 457.7 tps, lat 17.080 ms stddev 34.671
progress: 53.0 s, 384.1 tps, lat 21.050 ms stddev 41.342
progress: 54.0 s, 357.4 tps, lat 21.150 ms stddev 41.126
progress: 55.0 s, 367.5 tps, lat 22.050 ms stddev 39.252
progress: 56.0 s, 456.6 tps, lat 18.294 ms stddev 40.164
progress: 57.0 s, 426.6 tps, lat 19.150 ms stddev 37.671
progress: 58.0 s, 432.0 tps, lat 17.904 ms stddev 38.257
progress: 59.0 s, 482.0 tps, lat 17.073 ms stddev 34.926
progress: 60.0 s, 293.7 tps, lat 26.895 ms stddev 55.940
progress: 61.0 s, 273.3 tps, lat 28.227 ms stddev 52.700
progress: 62.0 s, 211.8 tps, lat 38.000 ms stddev 75.282
progress: 63.0 s, 473.4 tps, lat 17.457 ms stddev 37.566
progress: 64.0 s, 488.2 tps, lat 16.345 ms stddev 33.965
progress: 65.0 s, 578.9 tps, lat 14.097 ms stddev 34.299
progress: 66.0 s, 408.0 tps, lat 19.160 ms stddev 41.270
progress: 67.0 s, 591.3 tps, lat 13.575 ms stddev 31.911
progress: 68.0 s, 426.3 tps, lat 18.522 ms stddev 38.261
progress: 69.0 s, 518.1 tps, lat 15.755 ms stddev 46.619
progress: 70.0 s, 613.1 tps, lat 12.978 ms stddev 32.936
progress: 71.0 s, 560.0 tps, lat 13.724 ms stddev 32.372
progress: 72.0 s, 504.5 tps, lat 16.323 ms stddev 40.631
progress: 73.0 s, 626.0 tps, lat 12.432 ms stddev 36.660
progress: 74.0 s, 651.5 tps, lat 12.668 ms stddev 38.253
progress: 75.0 s, 558.2 tps, lat 14.233 ms stddev 47.848
progress: 76.0 s, 782.7 tps, lat 9.748 ms stddev 29.023
progress: 77.0 s, 651.9 tps, lat 12.336 ms stddev 39.291
progress: 78.0 s, 676.6 tps, lat 12.379 ms stddev 47.952
progress: 79.0 s, 655.5 tps, lat 11.603 ms stddev 27.595
progress: 80.0 s, 871.9 tps, lat 9.741 ms stddev 27.310
progress: 81.0 s, 568.2 tps, lat 13.182 ms stddev 39.631
progress: 82.0 s, 536.7 tps, lat 15.208 ms stddev 36.181
progress: 83.0 s, 319.6 tps, lat 14.948 ms stddev 43.019
progress: 84.0 s, 44.1 tps, lat 20.903 ms stddev 38.322
progress: 85.0 s, 215.9 tps, lat 84.987 ms stddev 416.860
progress: 86.0 s, 518.8 tps, lat 15.042 ms stddev 40.980
progress: 87.0 s, 379.4 tps, lat 21.147 ms stddev 57.995
progress: 88.0 s, 436.0 tps, lat 18.794 ms stddev 49.905
progress: 89.0 s, 552.5 tps, lat 13.998 ms stddev 37.635
progress: 90.0 s, 702.1 tps, lat 11.677 ms stddev 31.936
progress: 91.0 s, 964.0 tps, lat 8.700 ms stddev 24.919
progress: 92.0 s, 723.4 tps, lat 10.076 ms stddev 26.589
progress: 93.0 s, 855.1 tps, lat 10.012 ms stddev 31.914
progress: 94.0 s, 726.9 tps, lat 10.630 ms stddev 32.436
progress: 95.0 s, 932.6 tps, lat 8.948 ms stddev 26.067
progress: 96.0 s, 678.0 tps, lat 11.326 ms stddev 34.342
progress: 97.0 s, 843.0 tps, lat 9.961 ms stddev 28.049
progress: 98.0 s, 784.1 tps, lat 9.579 ms stddev 30.409
progress: 99.0 s, 812.0 tps, lat 10.388 ms stddev 30.140
progress: 100.0 s, 891.4 tps, lat 8.900 ms stddev 25.921
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: prepared
number of clients: 8
number of threads: 8
duration: 100 s
number of transactions actually processed: 41799
latency average = 19.157 ms
latency stddev = 89.105 ms
tps = 417.428278 (including connections establishing)
tps = 417.454607 (excluding connections establishing)


Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Andres Freund
Date:
Hi,

On 2017-10-02 18:33:17 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > To demonstrate what I'm observing here, on linux with a fairly fast ssd:
> > ...
> 
> I tried to replicate this test as closely as I could on the Mac hardware
> I have laying about.

Thanks!

> I only bothered with the synchronous_commit=off case, though, since
> you say that shows the worst effects.

That's the case on linux at least, but I'd suspect it's a much more
general thing - you just can't get that much data dirty with pgbench
otherwise.


> I'm kind of surprised that machine B doesn't show obvious tanking in this
> test given that msync() makes it suck so badly at copying a database.
> I wonder what is different from the kernel's standpoint ... maybe the
> sheer number of different files mmap'd by a single process during the
> copy?

Yea, that's curious. I've really no clue about OSX, so pardon my
question: With HEAD and CREATE DATABASE, is it IO blocked or kernel cpu
blocked?


> If we could arrange to not use pg_flush_data in copydir.c on macOS,
> I'd be okay with leaving it alone for the configurable flush_after
> calls.  But I can't think of a way to do that that wouldn't be a
> complete kluge.  I don't much want to do
> 
> +#ifndef __darwin__
>         pg_flush_data(dstfd, offset, nbytes);
> +#endif
> 
> but I don't see any better alternative ...

What we could do is introduce a guc (~create_database_flush_data) that
controls whether we flush here, and have the defaults set differently on
OSX. Not sure if that's actually any better.
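
Purely as a sketch of that idea (the GUC name is just the placeholder from
above, and this only illustrates how a boolean GUC with a platform-dependent
default could be wired up in guc.c):

	/* bool create_database_flush_data; -- declared with the other GUC variables */
	{
		{"create_database_flush_data", PGC_SIGHUP, RESOURCES_ASYNCHRONOUS,
			gettext_noop("Flushes data to disk while copying a database."),
			NULL
		},
		&create_database_flush_data,
#ifndef __darwin__
		true,			/* flush by default elsewhere */
#else
		false,			/* but not on macOS, given the msync() behavior */
#endif
		NULL, NULL, NULL
	},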

- Andres



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2017-10-02 18:33:17 -0400, Tom Lane wrote:
>> I'm kind of surprised that machine B doesn't show obvious tanking in this
>> test given that msync() makes it suck so badly at copying a database.
>> I wonder what is different from the kernel's standpoint ... maybe the
>> sheer number of different files mmap'd by a single process during the
>> copy?

> Yea, that's curious. I've really no clue about OSX, so pardon my
> question: With HEAD and CREATE DATABASE, is it IO blocked or kernel cpu
> blocked?

What I saw was that the backend process was consuming 100% of (one) CPU,
while the I/O transaction rate viewed by "iostat 1" started pretty low
--- under 10% of what the machine is capable of --- and dropped from
there as the copy proceeded.  I did not think to check if that was user
or kernel-space CPU, but I imagine it has to be the latter.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

From:
Andres Freund
Date:
On 2017-10-02 19:50:51 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2017-10-02 18:33:17 -0400, Tom Lane wrote:
> >> I'm kind of surprised that machine B doesn't show obvious tanking in this
> >> test given that msync() makes it suck so badly at copying a database.
> >> I wonder what is different from the kernel's standpoint ... maybe the
> >> sheer number of different files mmap'd by a single process during the
> >> copy?
> 
> > Yea, that's curious. I've really no clue about OSX, so pardon my
> > question: With HEAD and CREATE DATABASE, is it IO blocked or kernel cpu
> > blocked?
> 
> What I saw was that the backend process was consuming 100% of (one) CPU,
> while the I/O transaction rate viewed by "iostat 1" started pretty low
> --- under 10% of what the machine is capable of --- and dropped from
> there as the copy proceeded.  I did not think to check if that was user
> or kernel-space CPU, but I imagine it has to be the latter.

So that's pretty clearly a kernel bug... Hm. I wonder if it's mmap() or
msync() that's the problem here. I guess you didn't run a profile?

One interesting thing here is that in the CREATE DATABASE case there'll
probably be a lot larger contiguous mappings than in *_flush_after
cases. So it might be related to the size of the mapping / flush "unit".
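
One way to check would be a throwaway program that times the three calls
separately; the sketch below (standalone, not from the thread -- file name,
chunk count, and chunk size are arbitrary) approximates copy_file()'s write
pattern followed by the mmap/msync path pg_flush_data() takes on macOS.  If
the kernel problem is in one particular call, its column should dominate and
grow as the file gets larger:

/*
 * Standalone sketch: write 64KB chunks into a scratch file, then
 * mmap/msync(MS_ASYNC)/munmap each chunk, timing the three calls separately.
 *
 * Build: cc -O2 -o flushtest flushtest.c
 * Run:   ./flushtest /path/to/scratchfile number-of-64KB-chunks
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/time.h>

#define CHUNK (64 * 1024)		/* same unit copy_file() currently flushes */

static double
secs(struct timeval a, struct timeval b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
}

int
main(int argc, char **argv)
{
	char	   *buf;
	int			fd;
	long		nchunks,
				i;
	double		t_mmap = 0,
				t_msync = 0,
				t_munmap = 0;

	if (argc != 3)
	{
		fprintf(stderr, "usage: %s scratchfile nchunks\n", argv[0]);
		return 1;
	}
	nchunks = atol(argv[2]);
	fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0)
	{
		perror("open");
		return 1;
	}
	buf = malloc(CHUNK);
	memset(buf, 'x', CHUNK);

	for (i = 0; i < nchunks; i++)
	{
		off_t		offset = (off_t) i * CHUNK;
		void	   *p;
		struct timeval t0, t1, t2, t3;

		/* dirty the page cache the same way copy_file() does */
		if (pwrite(fd, buf, CHUNK, offset) != CHUNK)
		{
			perror("pwrite");
			return 1;
		}

		gettimeofday(&t0, NULL);
		p = mmap(NULL, CHUNK, PROT_READ, MAP_SHARED, fd, offset);
		gettimeofday(&t1, NULL);
		if (p == MAP_FAILED)
		{
			perror("mmap");
			return 1;
		}
		(void) msync(p, CHUNK, MS_ASYNC);	/* the flush request itself */
		gettimeofday(&t2, NULL);
		(void) munmap(p, CHUNK);
		gettimeofday(&t3, NULL);

		t_mmap += secs(t0, t1);
		t_msync += secs(t1, t2);
		t_munmap += secs(t2, t3);
	}

	printf("mmap %.3fs  msync %.3fs  munmap %.3fs\n",
		   t_mmap, t_msync, t_munmap);
	close(fd);
	return 0;
}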

Greetings,

Andres Freund



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Tom Lane
Дата:
Andres Freund <andres@anarazel.de> writes:
> On 2017-10-02 19:50:51 -0400, Tom Lane wrote:
>> What I saw was that the backend process was consuming 100% of (one) CPU,
>> while the I/O transaction rate viewed by "iostat 1" started pretty low
>> --- under 10% of what the machine is capable of --- and dropped from
>> there as the copy proceeded.  I did not think to check if that was user
>> or kernel-space CPU, but I imagine it has to be the latter.

> So that's pretty clearly a kernel bug... Hm. I wonder if it's mmap() or
> msync() that's the problem here. I guess you didn't run a profile?

Interestingly, profiling with Activity Monitor seems to blame the problem
entirely on munmap() ... which squares with the place I hit every time
when randomly stopping the process with gdb^Hlldb, so I'm inclined to
believe it.

This still offers no insight as to why CREATE DATABASE is hitting the
problem while regular flush activity doesn't.

> One interesting thing here is that in the CREATE DATABASE case there'll
> probably be a lot larger contiguous mappings than in *_flush_after
> cases. So it might be related to the size of the mapping / flush "unit".

Meh, the mapping is only 64K in this case vs. 8K in the other.  Hard
to credit that it breaks that easily.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Brent Dearth
Дата:
Tom, Andres -

Is there an issue tracker I could be looking at to follow along on the progress on this issue? 

Thanks so much!

On Mon, Oct 2, 2017 at 9:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
Andres Freund <andres@anarazel.de> writes:
> On 2017-10-02 19:50:51 -0400, Tom Lane wrote:
>> What I saw was that the backend process was consuming 100% of (one) CPU,
>> while the I/O transaction rate viewed by "iostat 1" started pretty low
>> --- under 10% of what the machine is capable of --- and dropped from
>> there as the copy proceeded.  I did not think to check if that was user
>> or kernel-space CPU, but I imagine it has to be the latter.

> So that's pretty clearly a kernel bug... Hm. I wonder if it's mmap() or
> msync() that's the problem here. I guess you didn't run a profile?

Interestingly, profiling with Activity Monitor seems to blame the problem
entirely on munmap() ... which squares with the place I hit every time
when randomly stopping the process with gdb^Hlldb, so I'm inclined to
believe it.

This still offers no insight as to why CREATE DATABASE is hitting the
problem while regular flush activity doesn't.

> One interesting thing here is that in the CREATE DATABASE case there'll
> probably be a lot larger contiguous mappings than in *_flush_after
> cases. So it might be related to the size of the mapping / flush "unit".

Meh, the mapping is only 64K in this case vs. 8K in the other.  Hard
to credit that it breaks that easily.

                        regards, tom lane

Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Tom Lane
Дата:
Brent Dearth <brent.dearth@gmail.com> writes:
> Is there an issue tracker I could be looking at to follow along on the
> progress on this issue?

This email thread is pretty much it ...

Current status is that I've filed a bug report with Apple and am waiting
to see their response before deciding what to do next.  If they fix the
issue promptly then there's little need for us to do anything.

If you want to help move things along, it would be useful to run some
experiments and see if there's a way to ameliorate the problem short of
the brute-force answer of disabling copy_file()'s pg_flush_data() call.
One thing that occurs to me is that even a 64KB file copy buffer is pretty
small on any modern machine.  If we were to up that to, say, 1MB, does
that help any?  See COPY_BUF_SIZE in src/backend/storage/file/copydir.c.
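
Concretely, that experiment is a one-line change (1MB is just the trial size
suggested above, not a recommendation):

/* src/backend/storage/file/copydir.c -- experiment only: try a 1MB copy
 * buffer in place of the current 8 * BLCKSZ (64KB with the default 8KB BLCKSZ) */
#define COPY_BUF_SIZE (1024 * 1024)
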
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Tom Lane
Дата:
I wrote:
> Current status is that I've filed a bug report with Apple and am waiting
> to see their response before deciding what to do next.  If they fix the
> issue promptly then there's little need for us to do anything.

Not having heard a peep from Apple yet, I decided to do a bit more
experimenting.  I found that indeed, issuing fewer and larger mmap/msync
requests helps enormously.  If you're willing to go as far as issuing
only one per file, the speed seems on par with non-fsync.  But that
requires being able to mmap 1GB-sized files, so it's probably not
something we want to do.

What I did instead was to adjust the logic in copy_file() so that the
unit of flush requests can be a multiple of the unit of read/write
requests.  (My original thought of just raising the buffer size seems
like not as good an idea; it's less cache-friendly for one thing.)

I find that on both Linux and macOS-with-HFS, requesting a flush only
every 1MB seems to be a win compared to flushing every 64KB as we
currently do.  Actually it seems that on macOS, every increment of
increase in the flush distance helps, but with HFS you hit diminishing
returns after 1MB or so.  With APFS you need a flush distance of 32MB
or more to have credible performance.

Accordingly I propose the attached patch.  If anyone's interested in
experimenting on other platforms, we might be able to refine/complicate
the FLUSH_DISTANCE selection further, but I think this is probably good
enough for a first cut.

            regards, tom lane

diff --git a/src/backend/storage/file/copydir.c b/src/backend/storage/file/copydir.c
index a5e074e..eae9f5a 100644
--- a/src/backend/storage/file/copydir.c
+++ b/src/backend/storage/file/copydir.c
@@ -139,10 +139,24 @@ copy_file(char *fromfile, char *tofile)
     int            dstfd;
     int            nbytes;
     off_t        offset;
+    off_t        flush_offset;

-    /* Use palloc to ensure we get a maxaligned buffer */
+    /* Size of copy buffer (read and write requests) */
 #define COPY_BUF_SIZE (8 * BLCKSZ)

+    /*
+     * Size of data flush requests.  It seems beneficial on most platforms to
+     * do this every 1MB or so.  But macOS, at least with early releases of
+     * APFS, is really unfriendly to small mmap/msync requests, so there we do
+     * it only every 32MB.
+     */
+#if defined(__darwin__)
+#define FLUSH_DISTANCE (32 * 1024 * 1024)
+#else
+#define FLUSH_DISTANCE (1024 * 1024)
+#endif
+
+    /* Use palloc to ensure we get a maxaligned buffer */
     buffer = palloc(COPY_BUF_SIZE);

     /*
@@ -163,11 +177,23 @@ copy_file(char *fromfile, char *tofile)
     /*
      * Do the data copying.
      */
+    flush_offset = 0;
     for (offset = 0;; offset += nbytes)
     {
         /* If we got a cancel signal during the copy of the file, quit */
         CHECK_FOR_INTERRUPTS();

+        /*
+         * We fsync the files later, but during the copy, flush them every so
+         * often to avoid spamming the cache and hopefully get the kernel to
+         * start writing them out before the fsync comes.
+         */
+        if (offset - flush_offset >= FLUSH_DISTANCE)
+        {
+            pg_flush_data(dstfd, flush_offset, offset - flush_offset);
+            flush_offset = offset;
+        }
+
         pgstat_report_wait_start(WAIT_EVENT_COPY_FILE_READ);
         nbytes = read(srcfd, buffer, COPY_BUF_SIZE);
         pgstat_report_wait_end();
@@ -190,15 +216,11 @@ copy_file(char *fromfile, char *tofile)
                      errmsg("could not write to file \"%s\": %m", tofile)));
         }
         pgstat_report_wait_end();
-
-        /*
-         * We fsync the files later but first flush them to avoid spamming the
-         * cache and hopefully get the kernel to start writing them out before
-         * the fsync comes.
-         */
-        pg_flush_data(dstfd, offset, nbytes);
     }

+    if (offset > flush_offset)
+        pg_flush_data(dstfd, flush_offset, offset - flush_offset);
+
     if (CloseTransientFile(dstfd))
         ereport(ERROR,
                 (errcode_for_file_access(),


Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Peter Eisentraut
Дата:
On 10/7/17 16:46, Tom Lane wrote:
> I wrote:
>> Current status is that I've filed a bug report with Apple and am waiting
>> to see their response before deciding what to do next.  If they fix the
>> issue promptly then there's little need for us to do anything.

> Accordingly I propose the attached patch.  If anyone's interested in
> experimenting on other platforms, we might be able to refine/complicate
> the FLUSH_DISTANCE selection further, but I think this is probably good
> enough for a first cut.

What is the status of this?  Is performance on High Sierra still bad?

-- 
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Andres Freund
Дата:

On November 8, 2017 7:31:17 AM PST, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:
>On 10/7/17 16:46, Tom Lane wrote:
>> I wrote:
>>> Current status is that I've filed a bug report with Apple and am waiting
>>> to see their response before deciding what to do next.  If they fix the
>>> issue promptly then there's little need for us to do anything.
>
>> Accordingly I propose the attached patch.  If anyone's interested in
>> experimenting on other platforms, we might be able to refine/complicate
>> the FLUSH_DISTANCE selection further, but I think this is probably good
>> enough for a first cut.
>
>What is the status of this?  Is performance on High Sierra still bad?

Didn't Tom commit a workaround?

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Tom Lane
Дата:
Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
> On 10/7/17 16:46, Tom Lane wrote:
>> Accordingly I propose the attached patch.  If anyone's interested in
>> experimenting on other platforms, we might be able to refine/complicate
>> the FLUSH_DISTANCE selection further, but I think this is probably good
>> enough for a first cut.

> What is the status of this?  Is performance on High Sierra still bad?

I committed the fix at 643c27e36.  If Apple have done anything about the
underlying problem, you couldn't tell it from their non-response to my
bug report.
        regards, tom lane



Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Tom Lane
Дата:
I wrote:
> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
>> What is the status of this?  Is performance on High Sierra still bad?

> I committed the fix at 643c27e36.  If Apple have done anything about the
> underlying problem, you couldn't tell it from their non-response to my
> bug report.

So, after just about one year of radio silence, I received email from
Apple saying that (a) they'd closed my bug report as a duplicate of
another one, and (b) that other one was also closed, and (c) not one
other iota of information.

This seems to be standard practice for them, though I'm at a loss
to say why they consider it even minimally acceptable.  From here,
it seems like a great way to piss people off and ensure they won't
bother filing any future bug reports.

Anyway, reading the tea leaves and considering the timing, one might
guess that they actually fixed the problem as of macOS Mojave.

            regards, tom lane


Re: [HACKERS] Horrible CREATE DATABASE Performance in High Sierra

От
Gavin Flower
Дата:
On 19/09/2018 16:38, Tom Lane wrote:
> I wrote:
>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:
>>> What is the status of this?  Is performance on High Sierra still bad?
>> I committed the fix at 643c27e36.  If Apple have done anything about the
>> underlying problem, you couldn't tell it from their non-response to my
>> bug report.
> So, after just about one year of radio silence, I received email from
> Apple saying that (a) they'd closed my bug report as a duplicate of
> another one, and (b) that other one was also closed, and (c) not one
> other iota of information.
>
> This seems to be standard practice for them, though I'm at a loss
> to say why they consider it even minimally acceptable.  From here,
> it seems like a great way to piss people off and ensure they won't
> bother filing any future bug reports.
>
> Anyway, reading the tea leaves and considering the timing, one might
> guess that they actually fixed the problem as of macOS Mojave.
>
>             regards, tom lane
>
Don't worry your tiny little mind, Apple knows best what is good for you!


[Smileys omitted, due to budget constraints]