Re: why not parallel seq scan for slow functions

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: why not parallel seq scan for slow functions
Date: 2017-09-19 21:35:56
Message-ID: CAMkU=1xcn1W1MSEtDtb90JhC8phRnNA2Yc4hwQk-rEqQ8rkhbQ@mail.gmail.com
Lists: pgsql-hackers

On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> wrote:

> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > The attached patch fixes both the review comments as discussed above.
>
> This cost stuff looks unstable:
>
> test select_parallel ... FAILED
>
> ! Gather (cost=0.00..623882.94 rows=9976 width=8)
>     Workers Planned: 4
> !   ->  Parallel Seq Scan on tenk1 (cost=0.00..623882.94 rows=2494 width=8)
> (3 rows)
>
> drop function costly_func(var1 integer);
> --- 112,120 ----
>   explain select ten, costly_func(ten) from tenk1;
>                          QUERY PLAN
> ----------------------------------------------------------------
> ! Gather (cost=0.00..625383.00 rows=10000 width=8)
>     Workers Planned: 4
> !   ->  Parallel Seq Scan on tenk1 (cost=0.00..625383.00 rows=2500 width=8)
> (3 rows)
>

That should be fixed by turning costs off in the explain, as is the tradition.
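For illustration, the usual idiom in the regression tests is EXPLAIN (COSTS OFF), which suppresses the cost, row, and width estimates so that platform- or statistics-dependent numbers cannot destabilize the expected output. A sketch (the exact query and expected output live in src/test/regress/sql/select_parallel.sql and its expected file):

```sql
-- COSTS OFF hides the "(cost=... rows=... width=...)" annotations,
-- leaving only the plan shape, which is what the test actually checks.
explain (costs off) select ten, costly_func(ten) from tenk1;
-- Expected shape (no cost figures to drift between runs):
--  Gather
--    Workers Planned: 4
--    ->  Parallel Seq Scan on tenk1
```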

See attached.

Cheers,

Jeff

Attachment Content-Type Size
parallel_paths_include_tlist_cost_v4.patch application/octet-stream 14.0 KB
