Re: BitmapHeapScan streaming read user and prelim refactoring

From: Andres Freund <andres(at)anarazel(dot)de>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: BitmapHeapScan streaming read user and prelim refactoring
Date: 2024-03-16 19:12:26
Message-ID: 20240316191226.b3c2vodhs23ibom4@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2024-03-15 18:42:29 -0400, Melanie Plageman wrote:
> On Fri, Mar 15, 2024 at 5:14 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> > On 2024-03-14 17:39:30 -0400, Melanie Plageman wrote:
> > I spent a good amount of time looking into this with Melanie. After a bunch of
> > wrong paths I think I found the issue: We end up prefetching blocks we have
> > already read. Notably this happens even as-is on master - just not as
> > frequently as after moving BitmapAdjustPrefetchIterator().
> >
> > From what I can tell the prefetching in parallel bitmap heap scans is
> > thoroughly broken. I added some tracking of the last block read, the last
> > block prefetched to ParallelBitmapHeapState and found that with a small
> > effective_io_concurrency we end up with ~18% of prefetches being of blocks we
> > already read! After moving the BitmapAdjustPrefetchIterator(), that rises to
> > no wonder it's slower...
> >
> > The race here seems fairly substantial - we're moving the two iterators
> > independently from each other, in multiple processes, without useful locking.
> >
> > I'm inclined to think this is a bug we ought to fix in the backbranches.
>
> Thinking about how to fix this: perhaps we could keep the current max
> block number in ParallelBitmapHeapState, and then, when prefetching,
> workers could loop calling tbm_shared_iterate() until they've found a
> block at least prefetch_pages ahead of the current block. They
> wouldn't need to re-read the current max value from the parallel
> state on each iteration. Even checking it once and storing it in a
> local variable prevented prefetching already-read blocks in my
> example repro of the issue.
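
If I understand the proposal right, the prefetch side would look
roughly like the sketch below. Entirely untested, and max_block_read
is an invented field - a BlockNumber in ParallelBitmapHeapState that
workers would update under the existing mutex whenever they fetch a
block:

#include "postgres.h"
#include "nodes/execnodes.h"    /* ParallelBitmapHeapState */
#include "nodes/tidbitmap.h"
#include "storage/bufmgr.h"
#include "storage/spin.h"

/*
 * Untested sketch of the proposed fix.  "max_block_read" is a
 * hypothetical new field in ParallelBitmapHeapState, updated under
 * pstate->mutex whenever a worker fetches a block.
 */
static void
BitmapPrefetchAhead(ParallelBitmapHeapState *pstate, Relation rel,
                    TBMSharedIterator *prefetch_iterator)
{
    BlockNumber max_block_read;
    TBMIterateResult *tbmpre;

    /* Read the shared high-water mark once, not on every iteration. */
    SpinLockAcquire(&pstate->mutex);
    max_block_read = pstate->max_block_read;
    SpinLockRelease(&pstate->mutex);

    /*
     * Skip forward until we find a block at least prefetch_pages ahead
     * of the newest block any worker has read, then prefetch it.  Only
     * a single prefetch per call here; the surrounding loop is elided.
     */
    while ((tbmpre = tbm_shared_iterate(prefetch_iterator)) != NULL)
    {
        if (tbmpre->blockno >= max_block_read + pstate->prefetch_pages)
        {
            PrefetchBuffer(rel, MAIN_FORKNUM, tbmpre->blockno);
            break;
        }
    }
}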

That would address some of the worst behaviour, but it doesn't really
tackle the underlying problem of the two iterators being advanced
independently. ISTM the proper fix would be to protect the state of
both iterators with a single lock, rather than pushing the locking down
into the bitmap code. OTOH, we'll only need one lock going forward, so
being economical about the effort spent fixing this also matters.
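
Very roughly what I have in mind - a sketch only, nothing in it exists
today: iterator_lock stands for a new LWLock in ParallelBitmapHeapState,
and tbm_shared_iterate_nolock() for a variant of tbm_shared_iterate()
without the per-iterator LWLock, which would become redundant once the
caller serializes access:

#include "postgres.h"
#include "nodes/execnodes.h"
#include "nodes/tidbitmap.h"
#include "storage/bufmgr.h"
#include "storage/lwlock.h"

/*
 * Sketch only: advance both iterators under one lock, so the read
 * position and the prefetch position can't race.  "iterator_lock" and
 * tbm_shared_iterate_nolock() are hypothetical; ramp-up of
 * prefetch_target and the prefetch_pages == 0 edge case are elided.
 */
static TBMIterateResult *
BitmapNextBlock(ParallelBitmapHeapState *pstate, Relation rel,
                TBMSharedIterator *iterator,
                TBMSharedIterator *prefetch_iterator)
{
    TBMIterateResult *tbmres;

    LWLockAcquire(&pstate->iterator_lock, LW_EXCLUSIVE);

    /* Advance the iterator that drives the actual reads. */
    tbmres = tbm_shared_iterate_nolock(iterator);
    if (tbmres != NULL && pstate->prefetch_pages > 0)
        pstate->prefetch_pages--;   /* consumed one prefetched block */

    /* Top the prefetch distance back up; it can never fall behind. */
    while (pstate->prefetch_pages < pstate->prefetch_target)
    {
        TBMIterateResult *tbmpre = tbm_shared_iterate_nolock(prefetch_iterator);

        if (tbmpre == NULL)
            break;
        pstate->prefetch_pages++;
        /* Issuing fadvise under the lock isn't great; fine for a sketch. */
        PrefetchBuffer(rel, MAIN_FORKNUM, tbmpre->blockno);
    }

    LWLockRelease(&pstate->iterator_lock);

    return tbmres;
}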

Greetings,

Andres Freund
