On Thu, Jul 16, 2015 at 10:54 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Ildus Kurbangaliev <i.kurbangaliev@postgrespro.ru> writes:
>> I made a benchmark of gettimeofday(). I believe it is certainly usable for monitoring.
>> Testing configuration:
>> 24 cores, Intel Xeon CPU X5675 @ 3.07GHz
>> RAM 24 GB
>
>> 54179703 - microseconds total
>> 2147483647 - (INT_MAX), the number of gettimeofday() calls
>
>> >>> 54179703 / 2147483647.0
>> 0.025229390256679331
>
>> Here we have the average duration of one gettimeofday() call in microseconds.
>
> 25 nsec per gettimeofday() is in the same ballpark as what I measured
> on a new-ish machine last year:
> http://www.postgresql.org/message-id/flat/31856.1400021891@sss.pgh.pa.us
>
> The problem is that (a) on modern hardware that is not a small number,
> it's the equivalent of 100 or more instructions; and (b) the results
> look very much worse on less-modern hardware, particularly machines
> where gettimeofday requires a kernel call.

Yes, we've been through this many times before. All you have to do is
look at how much slower a query gets when you run EXPLAIN ANALYZE vs.
when you run it without EXPLAIN ANALYZE. The slowdown there is
platform-dependent, but I think it's significant even on platforms
where gettimeofday is fast, like modern Linux machines. That overhead
is precisely the reason why we added EXPLAIN (ANALYZE, TIMING OFF) -
so that if you want to, you can see the row-count estimates without
incurring the timing overhead. There is *plenty* of evidence that
using gettimeofday in contexts where it may be called many times per
query measurably hurts performance.

It is possible that we can have an *optional feature* where timing can
be turned on, but it is dead certain that turning it on
unconditionally will be unacceptable.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company