From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Georgios <gkokolatos(at)protonmail(dot)com>, Konstantin Knizhnik <knizhnik(at)garret(dot)ru>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: Re: index prefetching
Date: 2025-08-15 18:05:06
Message-ID: CAH2-WzkwBsxitePiwCjNtCeMfJuSXvU1h2nmwL0AtE61SaOT3w@mail.gmail.com
Lists: pgsql-hackers
On Fri, Aug 15, 2025 at 1:23 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> Somewhat random note about I/O waits:
>
> Unfortunately, the I/O wait time we measure often massively *over*estimates the
> actual I/O time. If I execute the above query with the patch applied, we
> actually barely ever wait for I/O to complete, it's all completed by the time
> we have to wait for the I/O. What we are measuring is the CPU cost of
> *initiating* the I/O.
I do get that.
This was really obvious when I temporarily switched the prefetch patch
over from using READ_STREAM_DEFAULT to using READ_STREAM_USE_BATCHING
(that switch is probably buggy, but still seems likely to be
representative of what's possible with some care). I noticed that the
change reduced the reported "shared read" time by 10x -- which had
exactly zero impact on query execution time (at least for the queries
I looked at), since, as you say, the backend didn't have to wait for
I/O to complete either way.
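
For concreteness, here's a minimal sketch of what that flag switch
looks like at the read_stream_begin_relation() call site (this assumes
the read_stream.h API; heapRel, next_heap_block_cb and prefetch_state
are made-up placeholder names, not the ones used in the patch):

/*
 * Minimal sketch only -- not the actual patch code.  The only change
 * being discussed is the first argument: passing
 * READ_STREAM_USE_BATCHING instead of READ_STREAM_DEFAULT when the
 * prefetch code creates its read stream.
 */
ReadStream *stream;

stream = read_stream_begin_relation(READ_STREAM_USE_BATCHING,  /* was READ_STREAM_DEFAULT */
                                    NULL,                /* default buffer access strategy */
                                    heapRel,             /* heap relation being prefetched */
                                    MAIN_FORKNUM,
                                    next_heap_block_cb,  /* returns next heap block to read */
                                    prefetch_state,      /* callback's private state */
                                    0);                  /* no per-buffer data */

As I understand it, READ_STREAM_USE_BATCHING puts extra restrictions on
what the block-number callback is allowed to do while a batch of I/Os
is being staged, which is presumably part of why the naive switch is
"probably buggy".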
--
Peter Geoghegan