Re: Parallel Seq Scan vs kernel read ahead

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Ranier Vilela <ranier(dot)vf(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Seq Scan vs kernel read ahead
Date: 2020-06-11 04:03:17
Message-ID: CAA4eK1K3=dqpg5KaU=WWj=DSjYXFRS88o79xFP5XeLzC_ZHUeg@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jun 11, 2020 at 8:35 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>
> On Thu, 11 Jun 2020 at 14:09, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > On Thu, Jun 11, 2020 at 7:18 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
> > >
> > > On Thu, 11 Jun 2020 at 01:24, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > > Can we try the same test with 4, 8, 16 workers as well? I don't
> > > > foresee any problem with a higher number of workers, but it might
> > > > be better to check once, if it is not too much additional work.
> > >
> > > I ran the tests again with up to 7 workers. The CPU here only has 8
> > > cores (a laptop), so I'm not sure if there's much sense in going
> > > higher than that?
> > >
> >
> > I think it proves your point that there is value in going for a step
> > size greater than 64. However, I think the difference at higher sizes
> > is not significant. For example, the difference between 8192 and
> > 16384 doesn't seem like much if we leave out the higher worker
> > counts, where the data could be a bit misleading due to variation. I
> > can see a clear and significant difference up to 1024, but after that
> > the difference is not much.
>
> I guess the danger with going too big is that we have some Seqscan
> filter that causes some workers to do little to nothing with their
> rows beyond discarding them, while other workers are left with rows
> that pass the filter and require some expensive processing. Keeping
> the number of blocks on the smaller side would reduce the chances of
> someone being hit by that.
>

Right, and a good point.

> The algorithm I proposed above can still be capped by doing something
> like nblocks = Min(1024, pg_nextpower2_32(pbscan->phs_nblocks / 1024));
> That way we'll end up with:
>
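
Just to convince myself what that formula produces, I put together a
quick standalone sketch (not server code; next_power_of_2() below is
only a stand-in for pg_nextpower2_32(), and the relation sizes are
example values I picked):

#include <stdio.h>

/* stand-in for pg_nextpower2_32(): smallest power of 2 >= num */
static unsigned
next_power_of_2(unsigned num)
{
    unsigned    p = 1;

    while (p < num)
        p <<= 1;
    return p;
}

int
main(void)
{
    /* example relation sizes in 8kB blocks: 16MB, 128MB, 1GB, 8GB, 64GB */
    unsigned    sizes[] = {2048, 16384, 131072, 1048576, 8388608};

    for (int i = 0; i < 5; i++)
    {
        unsigned    nblocks = sizes[i];
        unsigned    chunk = next_power_of_2(nblocks / 1024);

        if (chunk > 1024)
            chunk = 1024;       /* the proposed cap */
        printf("%8u blocks -> chunk of %4u blocks\n", nblocks, chunk);
    }
    return 0;
}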

I think something along these lines would be a good idea, especially
keeping the step size proportional to the relation size. However, I am
not completely sure that doubling the step size each time the relation
size doubles (e.g. what is happening between 16MB and 8192MB) is the
best idea. Why not double the step size only when the relation size
increases four times? Would some more tests help us identify this? I
don't know the right answer here either, so I'm just trying to
brainstorm.
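
To make the comparison concrete, here is a hypothetical variant of the
sketch above that doubles the chunk only for every fourfold increase in
relation size (again just a rough illustration, not a concrete patch):

#include <stdio.h>

/*
 * Hypothetical variant: double the chunk for every 4x increase in
 * (nblocks / 1024) instead of every 2x, still capped at 1024 blocks.
 */
static unsigned
alt_chunk_size(unsigned nblocks)
{
    unsigned    chunk = 1;

    for (unsigned n = nblocks / 1024; n > 1 && chunk < 1024; n >>= 2)
        chunk <<= 1;
    return chunk;
}

int
main(void)
{
    /* same example relation sizes as above, in 8kB blocks */
    unsigned    sizes[] = {2048, 16384, 131072, 1048576, 8388608};

    for (int i = 0; i < 5; i++)
        printf("%8u blocks -> chunk of %4u blocks\n",
               sizes[i], alt_chunk_size(sizes[i]));
    return 0;
}

For those same example sizes this gives chunks of 2, 4, 16, 32 and 128
blocks, compared to 2, 16, 128, 1024 and 1024 with the formula quoted
above, so the step size grows roughly with the square root of the
relation size rather than linearly.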

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
