Re: Timing overhead and Linux clock sources

From: Ants Aasma
Subject: Re: Timing overhead and Linux clock sources
Date:
Msg-id CA+CSw_ueMfK1rx3HaH7WS1dJH=BnJyNr8b-ESa5tMd+4gAK1vw@mail.gmail.com
In response to: Re: Timing overhead and Linux clock sources  (Greg Smith <greg@2ndQuadrant.com>)
Responses: Re: Timing overhead and Linux clock sources  (Greg Smith <greg@2ndQuadrant.com>)
List: pgsql-hackers
On Wed, Dec 7, 2011 at 9:40 AM, Greg Smith <greg@2ndquadrant.com> wrote:
>  He estimated 22ns per gettimeofday on the system with fast timing
> calls--presumably using TSC, and possibly faster than I saw because his
> system had less cores than mine to worry about.  He got 990 ns on his slower
> system, and a worst case there of 3% overhead.

Robert's comment about sequential scans performing lots of reads in a tight loop
made me think of a worse worst case: a count(*) with wide rows and/or lots of
bloat. I created a test table with one tuple per page like this:
CREATE TABLE io_test WITH (fillfactor=10) AS
    SELECT repeat('x', 1000) FROM generate_series(1,30000);
I then timed SELECT COUNT(*) FROM io_test; with track_iotiming on and off.
The numbers are averages of 1000 executions, and the differences are
significant according to a t-test:
timer | iotiming=off |  iotiming=on | diff
 hpet |     86.10 ms |    147.80 ms | 71.67%
  tsc |     85.86 ms |     87.66 ms |  2.10%

The attached test program (test_gettimeofday_monotonic) shows that one
test loop iteration takes 34ns with tsc and 1270ns with hpet.
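
The measurement presumably boils down to timing a tight loop of back-to-back
gettimeofday() calls. A minimal sketch of that kind of loop (illustrative only,
not the attached program itself; the loop count and output format here are made
up):

#include <stdio.h>
#include <sys/time.h>

#define LOOPS 10000000

int
main(void)
{
    struct timeval start, end, cur;
    long        i;
    double      elapsed_us;

    gettimeofday(&start, NULL);
    for (i = 0; i < LOOPS; i++)
        gettimeofday(&cur, NULL);   /* the call whose cost we measure */
    gettimeofday(&end, NULL);

    elapsed_us = (end.tv_sec - start.tv_sec) * 1e6 +
                 (end.tv_usec - start.tv_usec);
    printf("%.1f ns per iteration\n", elapsed_us * 1000.0 / LOOPS);
    return 0;
}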

I also managed to run the test program on a couple of two-socket Solaris 10
machines. The one with a Xeon X5570 had an iteration time of 220ns and the one
with a Xeon E5620 had 270ns iterations. I'm not sure yet whether this is due to
Solaris gettimeofday just being slower or to some hardware issue.

I also tested a more reasonable case of count(*) on pgbench_accounts with
scale factor 50 (61 tuples per page). With tsc, timing was actually 1% faster,
though the difference was not statistically significant; with hpet the overhead
was 5.6%.

Scaling the overhead for the Solaris machines, it seems that the worst case
for timing all buffer reads would be about 20% slower there. A count(*) on
medium-length tuples, with the table out of shared buffers but in the OS
cache, would have an overhead between 1 and 2%.

>> One random thought: I wonder if there's a way for us to just time
>> every N'th event or something like that, to keep the overhead low.

This would only work for cases where there's a reasonably uniform distribution
of times or really long sampling periods; otherwise the variability will make
the result pretty much useless. For example, in the I/O case a pretty typical
load can have 1% of the timings be three orders of magnitude longer than the
median.
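
To make that concrete, here is a small simulation (made-up numbers, not from
any real workload) where 1% of events cost 1000x the rest; comparing the true
total against an estimate built from every 100th event shows how noisy the
sampled number gets:

#include <stdio.h>
#include <stdlib.h>

#define EVENTS  100000
#define N       100             /* sample every N'th event */

int
main(void)
{
    double      total = 0.0, sampled = 0.0;
    int         i;

    srand(42);
    for (i = 0; i < EVENTS; i++)
    {
        /* typical cost 1, but 1% of events cost 1000x that */
        double      cost = (rand() % 100 == 0) ? 1000.0 : 1.0;

        total += cost;
        if (i % N == 0)
            sampled += cost * N;    /* scale the sample back up */
    }

    printf("true total:   %.0f\n", total);
    printf("sampled est.: %.0f (off by %.1f%%)\n",
           sampled, 100.0 * (sampled - total) / total);
    return 0;
}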

--
Ants Aasma

Attachments
