Re: Parallel Append implementation

From: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Append implementation
Date: 2017-04-05 02:31:01
Message-ID: CAFjFpRcgKgsebLLi0A3C2eXsYWma=bsLE+3ugKaZEeTC5pFhHw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Apr 5, 2017 at 1:43 AM, Andres Freund <andres(at)anarazel(dot)de> wrote:

> On 2017-04-04 08:01:32 -0400, Robert Haas wrote:
> > On Tue, Apr 4, 2017 at 12:47 AM, Andres Freund <andres(at)anarazel(dot)de>
> wrote:
> > > I don't think the parallel seqscan is comparable in complexity with the
> > > parallel append case. Each worker there does the same kind of work,
> and
> > > if one of them is behind, it'll just do less. But correct sizing will
> > > be more important with parallel-append, because with non-partial
> > > subplans the work is absolutely *not* uniform.
> >
> > Sure, that's a problem, but I think it's still absolutely necessary to
> > ramp up the maximum "effort" (in terms of number of workers)
> > logarithmically. If you just do it by costing, the winning number of
> > workers will always be the largest number that we think we'll be able
> > to put to use - e.g. with 100 branches of relatively equal cost we'll
> > pick 100 workers. That's not remotely sane.
>
> I'm quite unconvinced that just throwing a log() in there is the best
> way to combat that. Modeling the issue of starting more workers through
> tuple transfer, locking, and startup overhead costing seems a better
> approach to me.
>
> If the goal is to compute the results of the query as fast as possible,
> and to not use more than max_parallel_per_XXX, and it's actually
> beneficial to use more workers, then we should. Because otherwise you
> really can't use the resources available.
>

+1. I had expressed a similar opinion earlier, but yours is better
articulated. Thanks.
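
For illustration only (this is not code from the patch, and the cap value
below is just an assumed stand-in for max_parallel_workers_per_gather),
here is a minimal sketch of the two approaches being debated: a log3-style
ramp-up in the spirit of what parallel seqscan does for heap pages, versus
simply asking for one worker per similarly-costed Append branch.

/*
 * Hypothetical sketch, not from the patch: two ways to pick a worker
 * count for an Append with many branches of roughly equal cost.
 */
#include <stdio.h>

/* One extra worker per ~3x increase in work units: a rough log3 ramp-up. */
static int
workers_log_rampup(int work_units, int max_workers)
{
	int			workers = 1;
	int			threshold = 3;

	while (work_units >= threshold && workers < max_workers)
	{
		workers++;
		threshold *= 3;
	}
	return workers;
}

/* Purely size-driven: one worker per unit of work, up to the cap. */
static int
workers_per_branch(int work_units, int max_workers)
{
	return (work_units < max_workers) ? work_units : max_workers;
}

int
main(void)
{
	int			branches = 100;	/* 100 similarly-costed Append branches */
	int			cap = 128;		/* assumed per-gather worker cap */

	printf("log ramp-up: %d workers\n", workers_log_rampup(branches, cap));
	printf("per-branch:  %d workers\n", workers_per_branch(branches, cap));
	return 0;
}

With 100 branches the ramp-up picks 5 workers while the per-branch rule
asks for all 100, which is the outcome Robert objects to; Andres's point
is that modeling startup, locking, and tuple-transfer overhead in the
costing itself could rein that in without an arbitrary log().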

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
