From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jianghua Yang <yjhjstz(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Use CLOCK_MONOTONIC_COARSE for instr_time when available
Date: 2025-07-16 22:24:33
Message-ID: 1298296.1752704673@sss.pgh.pa.us
Lists: pgsql-hackers

... BTW, another resource worth looking at is src/bin/pg_test_timing/
which we just improved a few days ago [1]. What I see on two different
Linux-on-Intel boxes is that the loop time it reports is 16 ns
and change, and the clock readings appear accurate to full nanosecond
precision. After changing instr_time.h to use CLOCK_MONOTONIC_COARSE,
the loop time drops to a bit over 5 ns, which would certainly be a nice
win if it were cost-free. But the clock precision degrades to 1 ms.
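
For anyone wanting to sanity-check that granularity on their own box
without patching anything, a standalone sketch like the one below
(just clock_getres() calls, nothing to do with pg_test_timing's
internals) shows what the kernel reports for each clock source:

    #include <stdio.h>
    #include <time.h>

    /* Print the kernel-reported resolution of a clock source. */
    static void
    print_res(const char *name, clockid_t id)
    {
        struct timespec ts;

        if (clock_getres(id, &ts) == 0)
            printf("%s: %ld.%09ld s\n", name, (long) ts.tv_sec, ts.tv_nsec);
        else
            perror(name);
    }

    int
    main(void)
    {
        /* CLOCK_MONOTONIC typically reports 1 ns resolution here ... */
        print_res("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
    #ifdef CLOCK_MONOTONIC_COARSE
        /* ... while the coarse clock reports the timer tick, i.e.
         * 1-4 ms depending on the kernel's CONFIG_HZ setting. */
        print_res("CLOCK_MONOTONIC_COARSE", CLOCK_MONOTONIC_COARSE);
    #endif
        return 0;
    }
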
It is really hard to believe that giving up a factor of a million
in clock precision is going to be an acceptable tradeoff for saving
~10 ns per clock reading. Maybe with a lot of fancy statistical
arm-waving, and an assumption that people always look at averages
over long query runs, you could make a case that this change isn't
going to result in a disaster. But EXPLAIN's results are surely
going to become garbage-in-garbage-out for any query that doesn't
run for (at least) hundreds of milliseconds.
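
For context, the loop-time figures quoted above come from a
measurement of roughly this shape (a minimal sketch, not the actual
pg_test_timing code; the iteration count is arbitrary, and swapping
in CLOCK_MONOTONIC_COARSE should reproduce the ~5 ns number):

    #include <stdio.h>
    #include <time.h>

    #define NLOOPS 10000000

    int
    main(void)
    {
        struct timespec start, end, tmp;
        double elapsed_ns;
        long i;

        /* Read the clock back-to-back and average the per-call cost. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < NLOOPS; i++)
            clock_gettime(CLOCK_MONOTONIC, &tmp);
        clock_gettime(CLOCK_MONOTONIC, &end);

        elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9 +
            (end.tv_nsec - start.tv_nsec);
        printf("avg per reading: %.1f ns\n", elapsed_ns / NLOOPS);
        return 0;
    }
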
regards, tom lane