Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc?

Hi,

On 2023-01-04 13:02:05 +0100, David Geier wrote:
> From be18633d4735f680c7910fcb4e8ac90c4eada131 Mon Sep 17 00:00:00 2001
> From: David Geier <geidav.pg@gmail.com>
> Date: Thu, 17 Nov 2022 10:22:01 +0100
> Subject: [PATCH 1/3] Change instr_time to just store nanoseconds, that's
>  cheaper.

Does anybody see a reason to not move forward with this aspect? We do a fair
amount of INSTR_TIME_ACCUM_DIFF() etc, and that gets a good bit cheaper by
just using nanoseconds. We'd also save memory in BufferUsage (144 -> 112 bytes),
Instrumentation (16 bytes saved in Instrumentation itself, 32 via
BufferUsage).
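
To make the arithmetic difference concrete, here's a rough standalone sketch
(not the actual patch, which changes the INSTR_TIME_* macros in
src/include/portability/instr_time.h): accumulating a timespec needs carry
handling on tv_nsec, whereas a plain int64 nanosecond counter is a single
subtraction and addition:

/* rough sketch only: timespec accumulation vs. a plain int64 ns counter */
#include <stdint.h>
#include <time.h>

/* timespec-style accumulation, roughly what INSTR_TIME_ACCUM_DIFF does today */
static inline void
accum_diff_timespec(struct timespec *acc,
                    struct timespec end, struct timespec start)
{
    acc->tv_sec += end.tv_sec - start.tv_sec;
    acc->tv_nsec += end.tv_nsec - start.tv_nsec;
    /* normalize tv_nsec into [0, 1000000000) */
    while (acc->tv_nsec < 0)
    {
        acc->tv_nsec += 1000000000;
        acc->tv_sec--;
    }
    while (acc->tv_nsec >= 1000000000)
    {
        acc->tv_nsec -= 1000000000;
        acc->tv_sec++;
    }
}

/* nanosecond-style accumulation: no branches, one subtraction and one add */
static inline void
accum_diff_nsec(int64_t *acc, int64_t end_ns, int64_t start_ns)
{
    *acc += end_ns - start_ns;
}

The memory savings fall out of the same change: a struct timespec is 16 bytes
on 64-bit platforms, an int64 is 8.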

While the range of instr_time storing nanoseconds wouldn't be good enough for
a generic timestamp facility (hence using microsecs for Timestamp), the range
seems plenty for its use of measuring runtime:

((2 ** 63) - 1) / ((10 ** 9) * 60 * 60 * 24 * 365) = ~292 years

Of course, when using CLOCK_REALTIME, this is relative to 1970-01-01, so just
239 years.
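
Spelled out as a quick sanity check (a throwaway snippet, not part of the
patch):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    /* nanoseconds per (non-leap) year */
    const int64_t ns_per_year = (int64_t) 1000000000 * 60 * 60 * 24 * 365;

    /* how long an int64 nanosecond counter can run before overflowing */
    printf("range: ~%lld years\n", (long long) (INT64_MAX / ns_per_year));

    /* CLOCK_REALTIME counts from 1970-01-01, so subtract the ~53 years
     * that have already elapsed */
    printf("headroom as of 2023: ~%lld years\n",
           (long long) (INT64_MAX / ns_per_year - (2023 - 1970)));

    return 0;
}

which prints ~292 and ~239 respectively.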

It could theoretically be a different story, if we stored instr_time's on
disk. But we don't, they're ephemeral.


This doesn't buy a whole lot of performance - the bottleneck is the actual
timestamp computation. But in a query with not much else going on, it's
visible and reproducible. It's, unsurprisingly, a lot easier to see when using
BUFFERS.

For both timespec and nanosecond, I measured three server starts, and for each
started server three executions of
pgbench -n -Mprepared -c1 -P5 -T15 -f <(echo "EXPLAIN (ANALYZE, BUFFERS) SELECT generate_series(1, 10000000) OFFSET 10000000;")

The best results are:
timespec: 1073.431
nanosec:  957.532
i.e. a ~10% difference.

Greetings,

Andres Freund


