Re: Improve choose_custom_plan for initial partition prune case

From: Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com>
To: Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
Subject: Re: Improve choose_custom_plan for initial partition prune case
Date: 2020-10-07 08:39:26
Message-ID: CAKU4AWrWSCFO5fh01GTnN+1T8K8MyVAi4Gw-TvYC-Vhx3JohUw@mail.gmail.com
Lists: pgsql-hackers

On Wed, Oct 7, 2020 at 2:43 PM Andy Fan <zhihui(dot)fan1213(at)gmail(dot)com> wrote:

>
>> 2. Associate them with RelationOid, and we can record such information in
>> the
>> Append node as well. The bad part is the same relation Oid may appear
>> multiple
>> times in a query. for example: SELECT .. FROM p p1, p p2 where
>> p1.partkey1 = $1
>> AND p2.partkey2 = $2;
>>
>>
> I just came up with a new idea. Since this situation should be rare, we
> can just fall back to our original method (totally ignoring the cost
> reduction) or use the average number. Fixing the 99% case would be a big
> win as well, IMO.
I just uploaded a runnable patch for this idea, but it looks like the
design was wrong from the beginning. For example:

Nested Loop
  -> Append
       -> p_1
       -> p_2
  -> inner

The patch only reduces the cost of the Append node, but since initial
pruning also reduces the loop count of the inner plan, that cost should be
reduced too. However, even if we could reduce the join cost as well, that
would not be a smart solution either, since the generic plan itself might
have been wrong from the beginning.
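To make the scenario concrete, here is a minimal sketch (the table,
partition, and parameter names are mine for illustration, not from the
patch): with initial pruning, $1 selects a single partition at executor
startup, so the Append scans one child instead of two, and the inner side
of the nested loop is rescanned correspondingly fewer times than the
generic-plan cost assumes.

```sql
-- Hypothetical setup: a two-partition table joined to a plain table.
CREATE TABLE p (partkey int, val int) PARTITION BY RANGE (partkey);
CREATE TABLE p_1 PARTITION OF p FOR VALUES FROM (0) TO (100);
CREATE TABLE p_2 PARTITION OF p FOR VALUES FROM (100) TO (200);
CREATE TABLE inner_tab (id int, val int);

-- The generic plan costs the Append over both children, but at executor
-- startup initial pruning removes the partition that cannot match $1,
-- so both the Append and the inner-side loop count shrink.
PREPARE q(int) AS
  SELECT * FROM p JOIN inner_tab i ON p.val = i.id
  WHERE p.partkey = $1;
EXPLAIN (ANALYZE) EXECUTE q(42);
-- Once the plancache switches to the generic plan, the output shows
-- "Subplans Removed: 1" under the Append node.
```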

--
Best Regards
Andy Fan

Attachment Content-Type Size
v1-0001-Reduce-some-generic-plan-cost-by-adjusting-the-Ap.patch application/octet-stream 24.2 KB
