Re: BitmapHeapScan streaming read user and prelim refactoring

From: Melanie Plageman <melanieplageman(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: BitmapHeapScan streaming read user and prelim refactoring
Date: 2024-03-15 22:42:29
Message-ID: CAAKRu_aCjdrGKZYug+jsqKxGX_AksKO6=AW3fAm3csbhhzQevg@mail.gmail.com
Lists: pgsql-hackers

On Fri, Mar 15, 2024 at 5:14 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:
> > I will soon send out a summary of what we investigated off-list about
> > 0010 (though we didn't end up concluding anything). My "fix" (leaving
> > BitmapAdjustPrefetchIterator() above table_scan_bitmap_next_block())
> > eliminates the regression in 0010 on the one example that I repro'd
> > upthread, but it would be good to know if it eliminates the
> > regressions across some other tests.
>
> I spent a good amount of time looking into this with Melanie. After a bunch of
> wrong paths I think I found the issue: We end up prefetching blocks we have
> already read. Notably this happens even as-is on master - just not as
> frequently as after moving BitmapAdjustPrefetchIterator().
>
> From what I can tell the prefetching in parallel bitmap heap scans is
> thoroughly broken. I added some tracking of the last block read, the last
> block prefetched to ParallelBitmapHeapState and found that with a small
> effective_io_concurrency we end up with ~18% of prefetches being of blocks we
> already read! After moving BitmapAdjustPrefetchIterator(), that rises to
> 86% -- no wonder it's slower...
>
> The race here seems fairly substantial - we're moving the two iterators
> independently from each other, in multiple processes, without useful locking.
>
> I'm inclined to think this is a bug we ought to fix in the backbranches.

Thinking about how to fix this: perhaps we could keep the current
maximum block number in ParallelBitmapHeapState, and then, when
prefetching, workers could loop calling tbm_shared_iterate() until
they find a block at least prefetch_pages ahead of that current
block. The workers wouldn't need to re-read the maximum value from
the parallel state on each iteration; even checking it once and
storing it in a local variable prevented prefetching already-read
blocks in my repro of the issue.
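To make the idea concrete, here is a minimal single-process sketch of
that skip-ahead loop. The struct and function names (SharedState,
next_prefetch_block) are hypothetical illustrations, not the actual
executor code, and the real version would read max_block_read from
shared memory and advance a real shared TBM iterator:

```c
/* Simplified, single-process model of the proposal above: skip
 * prefetch candidates that are not at least prefetch_pages ahead of
 * the highest block number any worker has already read. */

typedef struct
{
    int max_block_read;   /* highest block number any worker has read */
    int prefetch_pos;     /* prefetch iterator position */
    int nblocks;          /* total blocks in the (modeled) bitmap */
} SharedState;

/* Return the next block worth prefetching, or -1 if none remain.
 * The shared maximum is read once into a local variable, then the
 * prefetch iterator is advanced until it is far enough ahead. */
static int
next_prefetch_block(SharedState *st, int prefetch_pages)
{
    int cur_max = st->max_block_read;   /* snapshot shared value once */

    while (st->prefetch_pos < st->nblocks)
    {
        int blk = st->prefetch_pos++;

        if (blk >= cur_max + prefetch_pages)
            return blk;     /* far enough ahead: worth prefetching */
        /* otherwise the block was (or is about to be) read; skip it */
    }
    return -1;
}
```

In this model a worker can never issue a prefetch for a block at or
behind the snapshot of the shared maximum, which is the property the
paragraph above is after; the remaining race window is only as wide as
the staleness of that single snapshot.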

- Melanie
