Re: Parallel bitmap index scan

From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel bitmap index scan
Date: 2020-07-26 22:17:36
Message-ID: CA+hUKGLgxNHTnGEpK2oN3sOGt4EgzvJpM7jLZvBEAvjxTeoAtQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jul 27, 2020 at 1:58 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> On Sun, Jul 26, 2020 at 6:42 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> >
> > I would like to propose a patch for enabling the parallelism for the
> > bitmap index scan path.

 Workers Planned: 4
 ->  Parallel Bitmap Heap Scan on tenk1
       Recheck Cond: (hundred > 1)
-      ->  Bitmap Index Scan on tenk1_hundred
+      ->  Parallel Bitmap Index Scan on tenk1_hundred
             Index Cond: (hundred > 1)

+1, this is a very good feature to have.

+    /* Merge bitmap to a common shared bitmap */
+    SpinLockAcquire(&pstate->mutex);
+    tbm_merge(tbm, &pstate->tbm_shared, &pstate->pt_shared);
+    SpinLockRelease(&pstate->mutex);

This doesn't look like a job for a spinlock.
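
Spinlocks are only supposed to be held for a few instructions, and
tbm_merge() can do an arbitrary amount of work. A minimal sketch of the
LWLock variant, assuming a hypothetical merge_lock field in the shared
state (it'd need LWLockInitialize() when the leader sets up the DSM):

    /* Sketch only: "merge_lock" is a made-up field name. */
    LWLockAcquire(&pstate->merge_lock, LW_EXCLUSIVE);
    tbm_merge(tbm, &pstate->tbm_shared, &pstate->pt_shared);
    LWLockRelease(&pstate->merge_lock);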

You have a barrier so that you can wait until all workers have
finished merging their partial bitmaps into the complete bitmap, which
makes perfect sense. You also use that spinlock (which probably should
be an LWLock) to serialise the bitmap merging work... Hmm, I suppose
there would be an alternative design that also uses the barrier to
serialise the merge, and has the merging done entirely by one process,
like this:

bool chosen_to_merge = false;

/* Attach to the barrier, and see what phase we're up to. */
switch (BarrierAttach())
{
    case PBH_BUILDING:
        ... build my partial bitmap in shmem ...
        chosen_to_merge = BarrierArriveAndWait();
        /* Fall through */

    case PBH_MERGING:
        if (chosen_to_merge)
            ... perform merge of all partial results into final shmem bitmap ...
        BarrierArriveAndWait();
        /* Fall through */

    case PBH_SCANNING:
        /* We attached too late to help build the bitmap. */
        BarrierDetach();
        break;
}
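
For the record, BarrierArriveAndWait() returns true in exactly one of
the arriving processes, which is what elects the merger above, and both
it and BarrierAttach() really take the Barrier pointer, elided here for
brevity. The phase numbers would just be the barrier's phase counter,
which starts at zero and advances by one each time everyone arrives, so
the three cases map onto consecutive integers; purely illustratively:

    /* Illustrative only: names taken from the sketch above. */
    typedef enum
    {
        PBH_BUILDING,   /* workers build their partial bitmaps */
        PBH_MERGING,    /* one elected process merges them */
        PBH_SCANNING    /* final bitmap is ready to scan */
    } ParallelBitmapHeapPhase;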

Just an idea, not sure if it's a good one. I find it a little easier
to reason about the behaviour of late-attaching workers when the
phases are explicitly named and handled with code like that (it's not
immediately clear to me whether your coding handles late attachers
correctly, which seems to be one of the main problems to think about
with "dynamic party" parallelism...).
