Re: Max sane value for join_collapse_limit?

From: Philip Semanchuk <philip(at)americanefficient(dot)com>
To: Andreas Joseph Krogh <andreas(at)visena(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Max sane value for join_collapse_limit?
Date: 2022-06-03 15:11:14
Message-ID: 8A65D2C3-528A-448C-AEBD-FFAAB91024AC@americanefficient.com
Lists: pgsql-general

> On Jun 3, 2022, at 4:19 AM, Andreas Joseph Krogh <andreas(at)visena(dot)com> wrote:
>
> Hi, I have set join_collapse_limit = 12 in production, but I'm thinking about raising it to 16.
> On modern HW is there a “sane maximum” for this value?
> I can easily spare 10ms of extra planning per query on our workload; is 16 too high?

I set ours to 24 (along with from_collapse_limit = 24 and geqo_threshold = 25). Most of our queries that involve 10+ relations have slow execution times (20-30 minutes or more), so reducing planning time isn’t a major concern for us. If the planner takes an extra 20-30 seconds to find a plan that reduces execution time by 5%, we still come out ahead.
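
For reference, all three of those settings can be applied per session or cluster-wide; a quick sketch with the values we use (adjust to whatever suits your workload):

    -- Per-session, e.g. for a specific reporting connection:
    SET join_collapse_limit = 24;
    SET from_collapse_limit = 24;
    SET geqo_threshold = 25;

    -- Or cluster-wide:
    ALTER SYSTEM SET join_collapse_limit = 24;
    ALTER SYSTEM SET from_collapse_limit = 24;
    ALTER SYSTEM SET geqo_threshold = 25;
    SELECT pg_reload_conf();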

That said, in our environment the planner can make pretty bad choices once the number of relations gets into the mid teens, because we have some difficult-to-estimate join conditions. So we write our canned queries with this in mind, breaking them into two parts if necessary to avoid throwing too much at the planner at once (a sketch of that split is below). IOW, we generally don’t come anywhere near 24 relations in a single query. Our very high join_collapse_limit might still come into play if a user writes a very complicated ad hoc query.
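
As a purely hypothetical sketch of that two-part split (the table names are made up), the idea is to materialize part of the join so the planner only ever sees a handful of relations at a time:

    -- Step 1: materialize the first chunk of the join into a temp table.
    CREATE TEMP TABLE order_detail AS
    SELECT o.order_id, o.customer_id, li.product_id, li.quantity
    FROM orders o
    JOIN line_items li ON li.order_id = o.order_id
    JOIN products p ON p.product_id = li.product_id;

    -- Give the planner real stats on the intermediate result.
    ANALYZE order_detail;

    -- Step 2: join the materialized result to the remaining relations.
    SELECT od.order_id, c.name, r.region_name
    FROM order_detail od
    JOIN customers c ON c.customer_id = od.customer_id
    JOIN regions r ON r.region_id = c.region_id;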

So (IMHO) as is often the case, the answer is “it depends”. :-)

Cheers
Philip
