Re: Parallel query execution introduces performance regressions

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Jinho Jung <jinhojun(at)usc(dot)edu>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: Parallel query execution introduces performance regressions
Date: 2019-04-02 03:21:00
Message-ID: CAKJS1f_oU=oWdPgKcOA4P53WQA6djuS4xdCaLLvm4qxVZmo2CA@mail.gmail.com
Lists: pgsql-bugs

On Tue, 2 Apr 2019 at 08:00, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> On 2019-04-01 11:52:54 -0700, Peter Geoghegan wrote:
> > On Mon, Apr 1, 2019 at 11:30 AM Jinho Jung <jinhojun(at)usc(dot)edu> wrote:
> > > Surprisingly, we found that even on a larger TPC-C database (scale factor of 50, roughly 4GB of size), parallel scan is still slower than the non-parallel execution plan in the old version.
> >
> > That's not a large database, and it's certainly not a large TPC-C
> > database. If you attempt to stay under the spec's maximum
> > tpmC/throughput per warehouse, which is 12.86 tpmC per warehouse, then
> > you'll need several thousand warehouses on modern hardware. We're
> > talking several hundred gigabytes. Otherwise, as far as the spec is
> > concerned you're testing an unrealistic workload. There will be
> > individual customers that make many more purchases than is humanly
> > possible. You're modelling an app involving hypothetical warehouse
> > employees that must enter data into their terminals at a rate that is
> > not humanly possible.
>
> I don't think that's really the problem here. It's that there's a
> fundamental misestimation in the query:
>
> > [OLD version]
> > Nested Loop Semi Join (cost=0.00..90020417940.08 rows=30005835 width=8)
> > (actual time=0.034..24981.895 rows=90017507 loops=1)
> > Join Filter: (ref_0.ol_d_id <= ref_1.i_im_id)
> > -> Seq Scan on order_line ref_0 (cost=0.00..2011503.04 rows=90017504
> > width=12) (actual time=0.022..7145.811 rows=90017507 loops=1)
> > -> Materialize (cost=0.00..2771.00 rows=100000 width=4) (actual
> > time=0.000..0.000 rows=1 loops=90017507)
> > -> Seq Scan on item ref_1 (cost=0.00..2271.00 rows=100000 width=4)
> > (actual time=0.006..0.006 rows=1 loops=1)
>
> note the estimated rows=100000 vs the actual rows=1 in the seqscan /
> materialize. That's what makes the planner think this is much more
> expensive than it is, which in turn triggers the use of a parallel scan.

It's true that there's a bad row estimate here, but short of fixing
that, we could improve matters by having try_partial_nestloop_path()
check whether a Materialize node would help save on the rescan cost of
the inner plan, and apply one if it does. The planner does think it's
going to have to perform the inner seq scan 37507293 * 3 times, so
materializing is likely to help.
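
For what it's worth, something along these lines in
consider_parallel_nestloop()'s inner-path loop is roughly the shape of
what I mean. This is only an untested sketch, not a worked-out patch;
the enable_material / ExecMaterializesOutput / create_material_path
usage is just borrowed from how match_unsorted_outer() already handles
materialized inner paths for non-partial nestloops:

    /*
     * Also consider a materialized version of the inner path.  If the
     * inner path doesn't already materialize its output, sticking a
     * Material node on top can make the repeated rescans performed by
     * each worker much cheaper.
     */
    if (enable_material &&
        !ExecMaterializesOutput(innerpath->pathtype))
    {
        Path       *matpath;

        matpath = (Path *) create_material_path(innerrel, innerpath);

        try_partial_nestloop_path(root, joinrel, outerpath, matpath,
                                  pathkeys, jointype, extra);
    }

The nestloop costing should already account for the cheaper rescans of
a Material node, so add_partial_path() would just keep whichever of the
two partial nestloop paths comes out cheaper.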

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
