From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hannu Krosing <hannuk(at)google(dot)com>
Cc: "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, Peter Eisentraut <peter(at)eisentraut(dot)org>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: What is a typical precision of gettimeofday()?
Date: 2025-07-08 20:01:25
Message-ID: 903751.1752004885@sss.pgh.pa.us
Lists: pgsql-hackers
Hannu Krosing <hannuk(at)google(dot)com> writes:
> On Tue, Jul 8, 2025 at 8:07 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Even more interesting is what I got from an ancient PPC Macbook
>> (mamba's host, running NetBSD):
>>
>> Testing timing overhead for 3 seconds.
>> Per loop time including overhead: 731.26 ns
>> ...
>> Observed timing durations up to 99.9900%:
>>        ns   % of total  running %        count
>>       705      39.9162    39.9162      1637570
>>       706      17.6040    57.5203       722208
>>       759      18.6797    76.2000       766337
>>       760      23.7851    99.9851       975787
>>       813       0.0002    99.9853            9
>>       814       0.0004    99.9857           17
>>       868       0.0001    99.9858            4
>>       922       0.0001    99.9859            3
> Do we have a fencepost error in the limit code so that it stops before
> printing out the 99.9900% limit row?
No, I think what's happening there is that we get to NUM_DIRECT before
reaching the 99.99% mark. Running the test a bit longer, I do get a
hit at the next plausible 50 ns step:
$ ./pg_test_timing -d 10
Testing timing overhead for 10 seconds.
Per loop time including overhead: 729.79 ns
Histogram of timing durations:
    <= ns   % of total  running %        count
        0       0.0000     0.0000            0
        1       0.0000     0.0000            0
        3       0.0000     0.0000            0
        7       0.0000     0.0000            0
       15       0.0000     0.0000            0
       31       0.0000     0.0000            0
       63       0.0000     0.0000            0
      127       0.0000     0.0000            0
      255       0.0000     0.0000            0
      511       0.0000     0.0000            0
     1023      99.9879    99.9879     13700887
     2047       0.0000    99.9880            2
     4095       0.0063    99.9942          859
     8191       0.0019    99.9962          267
    16383       0.0017    99.9978          227
    32767       0.0012    99.9990          166
    65535       0.0001    99.9992           16
   131071       0.0007    99.9998           90
   262143       0.0000    99.9998            5
   524287       0.0001    99.9999           11
  1048575       0.0001   100.0000           10
Observed timing durations up to 99.9900%:
       ns   % of total  running %        count
      705      40.7623    40.7623      5585475
      706      17.9732    58.7355      2462787
      759      18.1392    76.8747      2485525
      760      23.1129    99.9876      3167060
      813       0.0000    99.9877            5
      814       0.0002    99.9878           23
      868       0.0000    99.9879            5
      869       0.0000    99.9879            1
      922       0.0000    99.9879            3
      923       0.0000    99.9879            2
      976       0.0000    99.9879            1
...
   625444       0.0000   100.0000            1
And the next step after that would be 1026 ns, which is past
the NUM_DIRECT array size.
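To make the two-tier accounting concrete, here is a minimal sketch of
the scheme described above: an exact per-nanosecond count array of
NUM_DIRECT entries feeding the "Observed" table, plus power-of-two
buckets feeding the histogram. The names (record_delta,
direct_histogram, log2_histogram, total_count) and the value
NUM_DIRECT = 1024 are illustrative assumptions, not the actual
pg_test_timing source:

#include <stdint.h>
#include <stdio.h>

#define NUM_DIRECT 1024   /* assumed; see pg_test_timing.c for the real value */
#define NUM_LOG2   64     /* enough buckets for any int64 delta */

static uint64_t direct_histogram[NUM_DIRECT];
static uint64_t log2_histogram[NUM_LOG2];
static uint64_t total_count;

/* Tally one loop delta, in ns, into both histograms. */
static void
record_delta(int64_t diff)
{
    int     bits = 0;

    /* log2 bucket i covers deltas <= 2^i - 1; bucket 0 is diff == 0 */
    for (int64_t d = diff; d > 0; d >>= 1)
        bits++;
    log2_histogram[bits]++;

    /* exact per-nanosecond counts exist only below NUM_DIRECT */
    if (diff >= 0 && diff < NUM_DIRECT)
        direct_histogram[diff]++;
    total_count++;
}

/* Emit the "Observed timing durations up to 99.9900%" table. */
static void
print_observed(void)
{
    double  running = 0.0;

    /*
     * Stop once 99.99% of samples are covered.  If nearly all of the
     * remaining tail sits at or above NUM_DIRECT ns, as on the PPC
     * machine above, the loop falls off the end of the array before
     * the cutoff is reached, so no 99.9900% row appears: not a
     * fencepost error, just a bounded array.
     */
    for (int i = 0; i < NUM_DIRECT && running < 99.99; i++)
    {
        if (direct_histogram[i] == 0)
            continue;
        running += 100.0 * direct_histogram[i] / total_count;
        printf("%9d %12.4f %10.4f %12llu\n",
               i, 100.0 * direct_histogram[i] / total_count,
               running, (unsigned long long) direct_histogram[i]);
    }
}

Under those assumptions, the cutoff check and the array bound are
tested together, and whichever is hit first ends the table.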
I considered raising NUM_DIRECT some more, but I think it'd be
overkill. This machine is surely an order of magnitude slower
than anything anyone would consider of practical interest today.
Just for fun, though, I tried a run with NUM_DIRECT = 10240:
$ ./pg_test_timing -d 10
Testing timing overhead for 10 seconds.
Per loop time including overhead: 729.23 ns
Histogram of timing durations:
    <= ns   % of total  running %        count
        0       0.0000     0.0000            0
        1       0.0000     0.0000            0
        3       0.0000     0.0000            0
        7       0.0000     0.0000            0
       15       0.0000     0.0000            0
       31       0.0000     0.0000            0
       63       0.0000     0.0000            0
      127       0.0000     0.0000            0
      255       0.0000     0.0000            0
      511       0.0000     0.0000            0
     1023      99.9878    99.9878     13711494
     2047       0.0000    99.9878            5
     4095       0.0062    99.9941          854
     8191       0.0021    99.9962          289
    16383       0.0017    99.9979          236
    32767       0.0011    99.9990          153
    65535       0.0002    99.9992           24
   131071       0.0006    99.9998           85
   262143       0.0001    99.9999            8
   524287       0.0001    99.9999            9
  1048575       0.0001   100.0000            7
  2097151       0.0000   100.0000            0
  4194303       0.0000   100.0000            0
  8388607       0.0000   100.0000            0
 16777215       0.0000   100.0000            0
 33554431       0.0000   100.0000            1
 67108863       0.0000   100.0000            1
Observed timing durations up to 99.9900%:
       ns   % of total  running %        count
      705      50.3534    50.3534      6905051
      706      22.1988    72.5522      3044153
      759      12.0613    84.6135      1653990
      760      15.3732    99.9867      2108150
      813       0.0000    99.9867            2
      814       0.0002    99.9869           27
      868       0.0006    99.9875           85
      869       0.0000    99.9876            2
      922       0.0001    99.9877           20
      923       0.0001    99.9878            9
      976       0.0000    99.9878            2
      977       0.0000    99.9878            3
     1031       0.0000    99.9878            4
     1248       0.0000    99.9878            1
     2550       0.0002    99.9880           26
     2604       0.0008    99.9889          114
     2605       0.0002    99.9891           30
     2658       0.0005    99.9896           75
     2659       0.0004    99.9901           61
...
 65362171       0.0000   100.0000            1
This is probably showing something interesting about the
behavior of NetBSD's scheduler, but I dunno what exactly.
regards, tom lane