RE: track_io_timing default setting

From: Jakub Wartak
Subject: RE: track_io_timing default setting
Date:
Msg-id: AM8PR07MB8248888818DD10B90895EC74F6719@AM8PR07MB8248.eurprd07.prod.outlook.com
In reply to: track_io_timing default setting  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
> Can we change the default setting of track_io_timing to on?

+1 for better observability by default.

> I can't imagine a lot of people who care much about its performance impact will be running the latest version of PostgreSQL on ancient/weird systems that have slow clock access. (And the few who do can just turn it off for their system.)
> For systems with fast user-space clock access, I've never seen this setting being turned on make a noticeable dent in performance. Maybe I just never tested enough in the most adverse scenario (which I guess would be a huge FS cache, a small shared buffers, and a high CPU count with constant churning of pages that hit the FS cache but miss shared buffers--not a system I have handy to do a lot of tests with.)

Coincidentally, I have some quick notes on measuring the impact of changing the "clocksource" on Linux 5.10.x (a real syscall vs. the vDSO optimization) on PgSQL 13.x as input to the discussion. The thing is that the slow "xen" implementation (at least on AWS i3, Amazon Linux 2) is the default, apparently because time read via the faster TSC/RDTSC sources can potentially drift backwards, e.g. during a potential(?) VM live migration. I haven't seen a better way to see what happens under the hood than strace and/or measuring a huge number of calls. Of course this only shows the impact on PgSQL as a whole (with track_io_timing=on), not just the difference between track_io_timing=on and off. IMHO better knowledge (in EXPLAIN ANALYZE, autovacuum) is worth more than this potential degradation when using slow clocksources.

With /sys/bus/clocksource/devices/clocksource0/current_clocksource=xen (the default on most AWS instances; ins.pgb = a simple insert into a table with only a PK, populated from a sequence):
# time ./testclock # 10e9 calls of gettimeofday()
real    0m58.999s
user    0m35.796s
sys     0m23.204s

//pgbench
    transaction type: ins.pgb
    scaling factor: 1
    query mode: simple
    number of clients: 8
    number of threads: 2
    duration: 100 s
    number of transactions actually processed: 5511485
    latency average = 0.137 ms
    latency stddev = 0.034 ms
    tps = 55114.743913 (including connections establishing)
    tps = 55115.999449 (excluding connections establishing)

With /sys/bus/clocksource/devices/clocksource0/current_clocksource=tsc :
# time ./testclock # 10e9 calls of gettimeofday()
real    0m2.415s
user    0m2.415s
sys     0m0.000s # XXX: notice, userland only workload, no %sys part

//pgbench:
    transaction type: ins.pgb
    scaling factor: 1
    query mode: simple
    number of clients: 8
    number of threads: 2
    duration: 100 s
    number of transactions actually processed: 6190406
    latency average = 0.123 ms
    latency stddev = 0.035 ms
    tps = 61903.863938 (including connections establishing)
    tps = 61905.261175 (excluding connections establishing)

In addition, what could be done here - if that XXX note holds true on more platforms - is to measure many gettimeofday() calls via getrusage() during startup and log a warning suggesting to check the OS clock implementation if they take relatively too long and/or the %sys part is > 0. I don't know what to suggest about time potentially going backwards, but switching track_io_timing=on doesn't feel like it is going to make stuff crash, so again I think it is a good idea.

-Jakub Wartak.


