RE: Planning time of Generic plan for a table partitioned into a lot

From: "Kato, Sho" <kato-sho(at)jp(dot)fujitsu(dot)com>
To: 'David Rowley' <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: RE: Planning time of Generic plan for a table partitioned into a lot
Date: 2018-11-29 07:17:20
Message-ID: 25C1C6B2E7BE044889E4FE8643A58BA963D8CDE1@G01JPEXMBKW03
Lists: pgsql-hackers

Hi, David

On Wednesday, November 28, 2018 1:23 PM David Rowley wrote:
> > I found that making a generic plan of SELECT/UPDATE/DELETE for a table partitioned into thousands is slow.
> > Especially, UPDATE/DELETE statement is too slow.
>
> It's quite well known and also documented [1] that this is slow. The manual
> reads:

Thanks for the detailed explanation, and sorry for the lack of detail in my report.
What worries me is that users who don't know how PREPARE/EXECUTE works will see query execution suddenly become slow and be at a loss.
Execution itself is not a problem, because choose_custom_plan() compares the costs of the custom and generic plans and picks the cheaper one; indeed, the chosen custom plan is faster than the generic plan.
However, since the generic plan is built at the sixth execution, that execution looks slow as a whole.

postgres=# prepare update_stmt(int) as update t set id = id + 1 where id = $1;
PREPARE
postgres=# \timing
Timing is on.
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 25.702 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.641 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.380 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 8.508 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 12.162 ms
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 26517.487 ms (00:26.517)
postgres=# execute update_stmt(8192);
UPDATE 0
Time: 13.208 ms
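
In this case, one way to avoid the spike entirely is the plan_cache_mode GUC discussed in this thread (a sketch; the trade-off is that the statement is re-planned on every execution):

postgres=# set plan_cache_mode = force_custom_plan;
SET
postgres=# execute update_stmt(8192);
UPDATE 0
-- with force_custom_plan the generic plan is never built,
-- so there is no slow sixth execution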

> Although I'm not
> sure this will do much for your SELECT case, since you're not pruning any
> partitions during planning.

Hmm, enable_partition_pruning is on (the default). Are there any other settings involved?

> Since you mentioned the plan_cache_mode GUC, then I assume you're not
> talking about any version of PostgreSQL that's been released, so if you're
> looking for a way to make it faster in master then I'd suggest helping
> with the review of [2]. If that patch does not meet your needs then also
> help Thomas with [3]. If that's still not good enough then you might need
> to do some research yourself. perf is your friend there.

Yes, I'm looking for a way to make it faster in master.
Thanks for the advice; I'll take a look.
My understanding is that since there is no run-time partition pruning for UPDATE/DELETE yet, with force_generic_plan the planner creates plans for all partitions, which is what takes so much time.
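
For illustration, the extra planning work is visible with EXPLAIN on the prepared statement (a sketch under that assumption; I have not pasted the full output here):

postgres=# set plan_cache_mode = force_generic_plan;
SET
postgres=# explain execute update_stmt(8192);
-- the generic plan contains one subplan per partition, since $1 is
-- unknown at plan time and UPDATE cannot prune at execution time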

Regards,
Sho Kato
> -----Original Message-----
> From: David Rowley [mailto:david(dot)rowley(at)2ndquadrant(dot)com]
> Sent: Wednesday, November 28, 2018 1:23 PM
> To: Kato, Sho/加藤 翔 <kato-sho(at)jp(dot)fujitsu(dot)com>
> Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
> Subject: Re: Planning time of Generic plan for a table partitioned into
> a lot
>
> On Tue, 27 Nov 2018 at 23:05, Kato, Sho <kato-sho(at)jp(dot)fujitsu(dot)com> wrote:
> > I found that making a generic plan of SELECT/UPDATE/DELETE for a table partitioned into thousands is slow.
> > Especially, UPDATE/DELETE statement is too slow.
>
> It's quite well known and also documented [1] that this is slow. The manual
> reads:
>
> "Currently, pruning of partitions during the planning of an UPDATE or
> DELETE command is implemented using the constraint exclusion method
> (however, it is controlled by the enable_partition_pruning rather than
> constraint_exclusion) — see the following section for details and caveats
> that apply."
>
> and later on the same page:
>
> "All constraints on all children of the parent table are examined during
> constraint exclusion, so large numbers of children are likely to increase
> query planning time considerably. So the legacy inheritance based
> partitioning will work well with up to perhaps a hundred child tables;
> don't try to use many thousands of children."
>
> That documentation should be getting adjusted by [2] as that patch aims
> to improve the performance of UPDATE/DELETE planning, and also improve
> planning performance for when partitions are pruned. Although I'm not
> sure this will do much for your SELECT case, since you're not pruning any
> partitions during planning. There's been a discussion in [3] about
> improving the performance of determining the relation's size, which is
> known to be quite a bottleneck when generating a plan which includes a
> partitioned table with a large number of partitions.
>
> > I'm afraid that I could not come up with a good idea, but how can I shorten the creation time of a generic plan?
>
> Since you mentioned the plan_cache_mode GUC, then I assume you're not
> talking about any version of PostgreSQL that's been released, so if you're
> looking for a way to make it faster in master then I'd suggest helping
> with the review of [2]. If that patch does not meet your needs then also
> help Thomas with [3]. If that's still not good enough then you might need
> to do some research yourself. perf is your friend there.
>
> [1] https://www.postgresql.org/docs/devel/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE
> [2] https://commitfest.postgresql.org/20/1778/
> [3] https://www.postgresql.org/message-id/flat/CAMyN-kCPin_stCMoXCVCq5J557e9-WEFPZTqdpO3j8wzoNVwNQ%40mail.gmail.com#e085c43b597b2775326afd9f3a2b6591
>
> --
> David Rowley http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
