From: Tomas Vondra <tomas(at)vondra(dot)me>
To: Peter Geoghegan <pg(at)bowt(dot)ie>, Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Georgios <gkokolatos(at)protonmail(dot)com>, Konstantin Knizhnik <knizhnik(at)garret(dot)ru>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Subject: Re: index prefetching
Date: 2025-08-13 21:19:29
Message-ID: a9c0a5f9-cf5e-4ac9-abf6-683483161169@vondra.me
Lists: pgsql-hackers
On 8/13/25 18:01, Peter Geoghegan wrote:
> On Wed, Aug 13, 2025 at 11:28 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>>> With "sync" I always get this (after a restart):
>>>
>>> Buffers: shared hit=7435 read=52801
>>>
>>> while with "worker" I get this:
>>>
>>> Buffers: shared hit=4879 read=52801
>>> Buffers: shared hit=5151 read=52801
>>> Buffers: shared hit=4978 read=52801
>>>
>>> So not only does it change run to run, it also does not add up to 60236.
>>
>> This is reproducible on master? If so, how?
>
> AFAIK it is *not* reproducible on master.
>
>> If this happens only with the prefetching patch applied, is it possible
>> that what happens here is that we occasionally re-request buffers that are
>> already in the process of being read in? That would only happen with a read
>> stream and io_method != sync (since with sync we won't read ahead). If we
>> have to start reading in a buffer that's already undergoing IO, we wait for
>> the IO to complete and count that access as a hit:
>
> This theory seems quite plausible to me. Though it is a bit surprising
> that I see incorrect buffer hit counts on the "good" forwards scan
> case, rather than on the "bad" backwards scan case.
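To make the theory concrete, the accounting it describes could be sketched roughly like this (a hypothetical, heavily simplified model; `BufferStats` and `request_block` are made-up names, not the actual bufmgr code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the accounting theory above -- NOT the actual
 * PostgreSQL bufmgr code.  Each block whose I/O we start ourselves counts
 * as a "read"; a re-request that arrives while that I/O is still in flight
 * waits for it and is counted as a "hit".  How many re-requests land in
 * that window depends on I/O timing, so the hit count can vary from run
 * to run while the read count stays fixed.
 */
typedef struct BufferStats
{
	long		shared_hit;
	long		shared_read;
} BufferStats;

static void
request_block(BufferStats *stats, bool io_already_in_progress)
{
	if (io_already_in_progress)
		stats->shared_hit++;	/* waited for in-flight I/O: counted as hit */
	else
		stats->shared_read++;	/* we started the I/O: counted as read */
}
```

Under this model, io_method=sync never re-requests a block mid-I/O (no read-ahead), so the counts are stable; with "worker", the number of re-requests that overlap an in-flight I/O depends on timing, which would explain hit counts that vary and don't add up.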
>
> Here's what I mean by things being broken on the read stream side (at
> least with certain backwards scan cases):
>
> When I add instrumentation to the read stream side, by adding elog
> debug calls that show the blocknum seen by read_stream_get_block, I
> see out-of-order and repeated blocknums with the "bad" backwards scan
> case ("SELECT * FROM t WHERE a BETWEEN 16336 AND 49103 ORDER BY a
> desc"):
>
> ...
> NOTICE: index_scan_stream_read_next: index 1163 TID (25052,21)
> WARNING: prior lastBlock is 25053 for batchno 2856, new one: 25052
> WARNING: blocknum: 25052, 0x55614810efb0
> WARNING: blocknum: 25052, 0x55614810efb0
> NOTICE: index_scan_stream_read_next: index 1161 TID (25053,3)
> WARNING: prior lastBlock is 25052 for batchno 2856, new one: 25053
> WARNING: blocknum: 25053, 0x55614810efb0
> NOTICE: index_scan_stream_read_next: index 1160 TID (25052,19)
> WARNING: prior lastBlock is 25053 for batchno 2856, new one: 25052
> WARNING: blocknum: 25052, 0x55614810efb0
> WARNING: blocknum: 25052, 0x55614810efb0
> NOTICE: index_scan_stream_read_next: index 1141 TID (25051,21)
> WARNING: prior lastBlock is 25052 for batchno 2856, new one: 25051
> WARNING: blocknum: 25051, 0x55614810efb0
> ...
>
> Notice that we see the same blocknum twice in close succession. Also
> notice that we're passed 25052 and then subsequently passed 25053, only
> to be passed 25052 once more.
>
I did investigate this, and I don't think there's anything broken in
read_stream. It happens because ReadStream has a concept of "ungetting" a
block, which can happen after hitting some I/O limits.

In that case we "remember" the last block (read_stream_look_ahead calls
read_stream_unget_block), and we return it again on the next round. It may
seem as if read_stream_get_block() produced the same block twice, but it's
really just the block remembered from the last round.

All the duplicates produced by read_stream_look_ahead were caused by this.
I suspected a bug in the lastBlock optimization, but that's not the case;
it happens entirely within read_stream, and it's expected behavior.
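The unget behavior can be sketched like this (a hypothetical, heavily simplified model with made-up names -- only the single buffered-block aspect is shown, this is not the actual read_stream.c code):

```c
#include <stddef.h>

typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/*
 * Hypothetical, heavily simplified model of the "unget" behavior --
 * NOT the actual read_stream.c code.
 */
typedef struct StreamSketch
{
	BlockNumber buffered_block; /* block saved by "unget" */
	BlockNumber (*callback) (void *arg);	/* produces the next block */
	void	   *callback_arg;
} StreamSketch;

static BlockNumber
sketch_get_block(StreamSketch *stream)
{
	BlockNumber blocknum;

	if (stream->buffered_block != InvalidBlockNumber)
	{
		/* Re-return the block remembered from the previous round. */
		blocknum = stream->buffered_block;
		stream->buffered_block = InvalidBlockNumber;
	}
	else
		blocknum = stream->callback(stream->callback_arg);

	return blocknum;
}

static void
sketch_unget_block(StreamSketch *stream, BlockNumber blocknum)
{
	/*
	 * Hit an I/O limit: remember the block and stop looking ahead; the next
	 * round will see the same block again.
	 */
	stream->buffered_block = blocknum;
}
```

Instrumentation placed on the "get" side then logs the same blocknum twice, even though the next-block callback only produced it once -- which matches the duplicated WARNING lines above.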
It's also not surprising that this happens more with backwards scans. The
I/O is apparently much slower (due to missing OS readahead), so we're much
more likely to hit the I/O limits (max_ios and various other limits in
read_stream_start_pending_read).
regards
--
Tomas Vondra