| From: | David Geier <geidav(dot)pg(at)gmail(dot)com> |
|---|---|
| To: | Hannu Krosing <hannuk(at)google(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com> |
| Cc: | Lukas Fittl <lukas(at)fittl(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, Maciek Sakrejda <m(dot)sakrejda(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Reduce timing overhead of EXPLAIN ANALYZE using rdtsc? |
| Date: | 2026-01-11 19:26:17 |
| Message-ID: | 3bdce15d-25ec-4c49-9906-818803462897@gmail.com |
| Lists: | pgsql-hackers |
> Based on Robert's suggestion I wanted to add a "fast_clock_source" enum
> GUC which can have the following values "auto", "rdtsc", "try_rdtsc" and
> "off". With that, at least no additional checks are needed and
> performance will remain as previously benchmarked in this thread.
The attached patch set is rebased onto the latest master and contains a commit
that adds a "fast_clock_source" GUC, which on Linux can be set to "try",
"off", or "rdtsc".
Alternatively, we could name the GUC "clock_source" with the values "auto",
"clock_gettime", and "rdtsc". Opinions?
I moved the call to INSTR_TIME_INITIALIZE() from InitPostgres() to
PostmasterMain(). When it was called from InitPostgres(), it kept the database
stuck in a recovery cycle.
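
To sketch why a single call in PostmasterMain() suffices: the clock source
only needs to be detected and calibrated once, and forked backends inherit the
result. A minimal standalone illustration of such a one-time calibration
follows; the patch presumably derives the TSC frequency more robustly, so
treat this as an assumption-laden example rather than what 0001 does:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>          /* __rdtsc() on GCC/Clang, x86 */

/*
 * Roughly calibrate TSC ticks per nanosecond against CLOCK_MONOTONIC.
 * Done once at startup; child processes inherit the value via fork().
 */
static double
calibrate_ticks_per_ns(void)
{
    struct timespec t0, t1;
    struct timespec pause = {.tv_sec = 0, .tv_nsec = 10 * 1000 * 1000};
    uint64_t    c0, c1;
    int64_t     ns;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    c0 = __rdtsc();
    nanosleep(&pause, NULL);    /* 10 ms sampling window */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    c1 = __rdtsc();

    ns = (int64_t) (t1.tv_sec - t0.tv_sec) * 1000000000 +
        (t1.tv_nsec - t0.tv_nsec);
    return (double) (c1 - c0) / (double) ns;
}

int
main(void)
{
    printf("~%.3f TSC ticks per ns\n", calibrate_ticks_per_ns());
    return 0;
}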
> I'll still add unlikely() around the if (has_rdtsc).
Done.
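
To show what that amounts to, here is a simplified standalone sketch of the
hot read path. The names are made up and the exact hint placement may differ
from the patch; this version simply marks the clock_gettime() fallback as the
cold branch:

/* standalone sketch; in PostgreSQL, unlikely() comes from c.h */
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>          /* __rdtsc() */

#define unlikely(x) __builtin_expect((x) != 0, 0)

/* hypothetical flag, decided once at postmaster start (see above) */
static int use_rdtsc = 1;

static inline uint64_t
get_ticks(void)
{
    if (unlikely(!use_rdtsc))
    {
        /* cold fallback: clock_gettime(), as today */
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000000 + (uint64_t) ts.tv_nsec;
    }

    /* hot path: a single TSC read, no vDSO call */
    return __rdtsc();
}

int
main(void)
{
    uint64_t    a = get_ticks();
    uint64_t    b = get_ticks();

    printf("delta = %llu ticks\n", (unsigned long long) (b - a));
    return 0;
}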
--
David Geier
| Attachment | Content-Type | Size |
|---|---|---|
| v3-0003-Add-GUC.patch | text/x-patch | 8.5 KB |
| v3-0002-pg_test_timing-Also-test-fast-timing-and-report-t.patch | text/x-patch | 7.7 KB |
| v3-0001-Use-time-stamp-counter-to-measure-time-on-Linux-x.patch | text/x-patch | 17.2 KB |