Re: Timing overhead and Linux clock sources

From: Ants Aasma <ants(dot)aasma(at)eesti(dot)ee>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Timing overhead and Linux clock sources
Date: 2011-12-09 23:48:45
Message-ID: CA+CSw_ueMfK1rx3HaH7WS1dJH=BnJyNr8b-ESa5tMd+4gAK1vw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Dec 7, 2011 at 9:40 AM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
>  He estimated 22ns per gettimeofday on the system with fast timing
> calls--presumably using TSC, and possibly faster than I saw because his
> system had less cores than mine to worry about.  He got 990 ns on his slower
> system, and a worst case there of 3% overhead.

Robert's comment about sequential scans performing lots of reads in a tight loop
made me think of a worse worst case: a count(*) with wide rows and/or lots of
bloat. I created a test table with one tuple per page like this:
-- with fillfactor 10 a page counts as full after ~800 bytes,
-- so each ~1 kB row lands on its own 8 kB page
CREATE TABLE io_test WITH (fillfactor=10) AS
SELECT repeat('x', 1000) FROM generate_series(1,30000);
I then timed SELECT COUNT(*) FROM io_test; with track_iotiming on and
off. Averages over 1000 executions; the differences are significant
according to a t-test:
 timer | iotiming=off | iotiming=on |   diff
-------+--------------+-------------+--------
 hpet  |     86.10 ms |   147.80 ms | 71.67%
 tsc   |     85.86 ms |    87.66 ms |  2.10%

The attached test program (test_gettimeofday_monotonic) shows that one
test loop iteration takes 34ns with tsc and 1270ns with hpet.
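
For reference, here is a minimal sketch of that kind of measurement loop
(the real attachment may differ in detail): it hammers gettimeofday(),
verifies the clock never moves backwards, and reports the average cost
per iteration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int
main(void)
{
    struct timeval start;
    struct timeval prev;
    struct timeval cur;
    long long   iterations = 0;
    double      elapsed = 0.0;

    gettimeofday(&start, NULL);
    prev = start;

    /* hammer gettimeofday() for about one second of wall-clock time */
    do
    {
        gettimeofday(&cur, NULL);

        /* the "monotonic" part: time must never move backwards */
        if (cur.tv_sec < prev.tv_sec ||
            (cur.tv_sec == prev.tv_sec && cur.tv_usec < prev.tv_usec))
        {
            fprintf(stderr, "clock went backwards!\n");
            exit(1);
        }
        prev = cur;
        iterations++;
        elapsed = (cur.tv_sec - start.tv_sec) +
                  (cur.tv_usec - start.tv_usec) / 1e6;
    } while (elapsed < 1.0);

    printf("%lld calls in %.3f s: %.1f ns per iteration\n",
           iterations, elapsed, elapsed * 1e9 / iterations);
    return 0;
}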

I also managed to run the test program on a couple of two-socket Solaris 10
machines. The one with a Xeon X5570 had an iteration time of 220 ns; the one
with a Xeon E5620 took 270 ns per iteration. I'm not sure yet whether this is
due to Solaris's gettimeofday just being slower or to some hardware issue.

I also tested a more reasonable case: count(*) on pgbench_accounts at
scale factor 50 (61 tuples per page). With tsc the timed run was actually 1%
faster, though the difference was not statistically significant; with hpet
the overhead was 5.6%.

Scaling the overhead for the Solaris machines, it seems that the worst case
of timing all buffer reads would be about 20% slower. A count(*) on
medium-length tuples, for a table that doesn't fit in shared buffers but is
in the OS cache, would have an overhead between 1% and 2%.

>> One random thought: I wonder if there's a way for us to just time
>> every N'th event or something like that, to keep the overhead low.

This would work only where there's a reasonably uniform distribution of
times or a really long sampling period; otherwise the variability makes the
result pretty much useless. For example, in the I/O case a pretty typical
load can have 1% of timings be three orders of magnitude longer than the
median.
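
To illustrate, a sampled timer along those lines might look like the sketch
below (all names here are made up for illustration, not taken from any
patch). The estimate is the sampled time multiplied by N, so whether the
rare huge outliers happen to land in the sample dominates the result:

#include <stdio.h>
#include <sys/time.h>

#define SAMPLE_EVERY 16         /* time every Nth event */

static long long events = 0;
static double sampled_time = 0.0;   /* seconds spent in sampled events */

static double
now_seconds(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

/* run one event, timing it only if it falls in the sample */
static void
timed_event(void (*event) (void))
{
    if (events++ % SAMPLE_EVERY == 0)
    {
        double      start = now_seconds();

        event();
        sampled_time += now_seconds() - start;
    }
    else
        event();
}

static void
dummy_event(void)
{
    /* stand-in for a buffer read */
}

int
main(void)
{
    int         i;

    for (i = 0; i < 1000000; i++)
        timed_event(dummy_event);

    /*
     * The estimate scales the sampled time by N, so a single event that
     * runs 1000x longer than the median being caught (or missed) by the
     * sample moves the extrapolated total enormously.
     */
    printf("estimated total: %.6f s\n", sampled_time * SAMPLE_EVERY);
    return 0;
}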

--
Ants Aasma

Attachment: test_gettimeofday_monotonic.c (text/x-csrc, 1.6 KB)
