Re: why not parallel seq scan for slow functions

From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: why not parallel seq scan for slow functions
Date: 2017-08-17 08:39:23
Message-ID: CAFiTN-tsf9cwpnYVrFPnHJHgxaJp1oYxoFok4KOZu9-sXAU9zA@mail.gmail.com
Lists: pgsql-hackers

On Sat, Aug 12, 2017 at 6:48 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> On Thu, Aug 10, 2017 at 1:07 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Tue, Aug 8, 2017 at 3:50 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>>> Right.
>>>
>
> I think skipping generation of gather paths for a scan node or the
> top-level join node generated via standard_join_search seems
> straightforward, but skipping it for paths generated via geqo seems
> tricky (see the use of generate_gather_paths in merge_clump).
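
(For context, the "straightforward" part on the standard_join_search
side would presumably amount to guarding the existing call, roughly as
in the sketch below; this is illustrative only, based on the current
loop in allpaths.c.)

/*
 * Sketch only: in standard_join_search(), skip Gather path generation
 * for the top-most join level; those paths would be built later, once
 * the final target list has been applied.
 */
foreach(lc, root->join_rel_level[lev])
{
    rel = (RelOptInfo *) lfirst(lc);

    /* Create GatherPaths for any useful partial paths for rel */
    if (lev < levels_needed)
        generate_gather_paths(root, rel);

    /* Find and save the cheapest paths for this rel */
    set_cheapest(rel);
}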

Either we can pass "num_gene" to merge_clump, or we can store num_gene
in the root; inside merge_clump we can then make the check, as below.
Do you see any further complexity?

if (joinrel)
{
    /* Create GatherPaths for any useful partial paths for rel */
    if (old_clump->size + new_clump->size < num_gene)
        generate_gather_paths(root, joinrel);
}
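
A slightly fuller sketch of that (illustrative only, assuming
merge_clump in geqo_eval.c simply gains a num_gene parameter rather
than fetching it from the root):

/*
 * Sketch: thread num_gene through to merge_clump() so the check above
 * can be made there.  Prototype and main call sites only; the other
 * call sites of merge_clump() would be adjusted the same way.
 */
static List *merge_clump(PlannerInfo *root, List *clumps,
                         Clump *new_clump, int num_gene, bool force);

/* in gimme_tree(), which already has num_gene in scope: */
clumps = merge_clump(root, clumps, cur_clump, num_gene, false);

/* in merge_clump() itself, when re-merging an existing clump: */
return merge_clump(root, clumps, old_clump, num_gene, force);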

> Assuming we find some way to skip it for the top-level scan/join
> node, I don't think that will be sufficient: we have a special way to
> push the target list below the Gather node in
> apply_projection_to_path, and we need to move that part into
> generate_gather_paths as well.
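
Purely as an illustration of that last point, moving the pushdown into
generate_gather_paths could look roughly like the sketch below. The
extra "target" parameter is hypothetical; the parallel-safety check and
create_projection_path() call mirror what apply_projection_to_path()
does today for Gather, and GatherMerge handling is omitted for brevity.

#include "postgres.h"
#include "optimizer/clauses.h"    /* is_parallel_safe() */
#include "optimizer/pathnode.h"   /* add_path(), create_gather_path(), ... */

/* hypothetical signature: caller supplies the final scan/join target */
void
generate_gather_paths(PlannerInfo *root, RelOptInfo *rel, PathTarget *target)
{
    Path    *cheapest_partial_path;
    Path    *subpath;

    /* If the rel has no partial paths, there is nothing to do. */
    if (rel->partial_pathlist == NIL)
        return;

    cheapest_partial_path = (Path *) linitial(rel->partial_pathlist);
    subpath = cheapest_partial_path;

    /*
     * Push the target list below the Gather when it is parallel safe,
     * so that workers can help compute it; this is the logic that
     * currently lives in apply_projection_to_path().
     */
    if (target != NULL && is_parallel_safe(root, (Node *) target->exprs))
        subpath = (Path *) create_projection_path(root, rel, subpath, target);

    add_path(rel, (Path *)
             create_gather_path(root, rel, subpath,
                                target ? target : rel->reltarget,
                                NULL, NULL));
}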

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
