Re: index prefetching

From: Andres Freund <andres(at)anarazel(dot)de>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Georgios <gkokolatos(at)protonmail(dot)com>, Konstantin Knizhnik <knizhnik(at)garret(dot)ru>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: Re: index prefetching
Date: 2025-08-14 18:53:42
Message-ID: 6wyxbnry2unm3kbcu2sabhzhs7baoedlg77xqm42chpofjq45g@igst42zpl7ok
Lists: pgsql-hackers

Hi,

On 2025-08-14 14:44:44 -0400, Peter Geoghegan wrote:
> On Thu Aug 14, 2025 at 1:57 PM EDT, Peter Geoghegan wrote:
> > The only interesting thing about the flame graph is just how little
> > difference there seems to be (at least for this particular perf event
> > type).
>
> I captured method_io_uring.c DEBUG output from running each query in the
> server log, in the hope that it would shed some light on what's really going
> on here. I think that it just might.
>
> I count a total of 12,401 distinct sleeps for the sequential/slow backwards
> scan test case:
>
> $ grep -E "wait_one with [1-9][0-9]* sleeps" sequential.txt | head
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.278 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> 2025-08-14 14:35:03.279 EDT [2516983][client backend] [[unknown]][0/1:0] DEBUG: 00000: wait_one with 1 sleeps
> $ grep -E "wait_one with [1-9][0-9]* sleeps" sequential.txt | awk '{ total += $11 } END { print total }'
> 12401
>
> But only 104 such sleeps are seen in total when the random backwards scan query
> is run -- which might begin to explain the mystery of why it runs so much faster:
>
> $ grep -E "wait_one with [1-9][0-9]* sleeps" random.txt | awk '{ total += $11 } END { print total }'
> 104

I think this is just an indicator of being IO bound. That message is output
whenever we have to wait for an IO to finish. So if one workload prints it
~12k times and another only 104 times, that's because in the latter case the
IO had typically already completed by the time we needed it to be finished in
order to continue.

Factors potentially leading to slower IO:

- sometimes random IO *can* be faster for SSDs, because it allows different
  flash chips to work concurrently, rather than being bound by the speed of
  one flash chip

- it's possible that with your SSD the sequential IO leads to more IO
  combining. Larger IOs always have a higher latency than smaller IOs - but
  obviously fewer IOs are needed. The increased latency may be bad enough for
  your access pattern to trigger more waits.

  It's *not* necessarily enough to just lower io_combine_limit, because the OS
  can also do its own combining (see the example below).
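
  E.g. to check the current setting and lower it just for one session (the 16kB
  value and the "<your query>" placeholder below are only illustrative):

  $ psql -c "SHOW io_combine_limit"
  $ psql -c "SET io_combine_limit = '16kB'; EXPLAIN (ANALYZE, BUFFERS) <your query>"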

I'd see what changes if you temporarily reduce
/sys/block/nvme6n1/queue/max_sectors_kb to a smaller size.
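
Something roughly like this (needs root; 128 is just an example value, and the
setting resets at reboot, so note the old value first):

$ cat /sys/block/nvme6n1/queue/max_sectors_kb            # current cap on kernel IO size, in kB
$ echo 128 | sudo tee /sys/block/nvme6n1/queue/max_sectors_kb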

Could you show iostat for both cases?
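
E.g. something like this, captured while each query runs, would show per-second
request sizes, throughput and queue depth for the device:

$ iostat -dxm nvme6n1 1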

Greetings,

Andres Freund
