| From: | Chengpeng Yan <chengpeng_yan(at)outlook(dot)com> |
|---|---|
| To: | John Naylor <johncnaylorls(at)gmail(dot)com> |
| Cc: | Tomas Vondra <tomas(at)vondra(dot)me>, lakshmi <lakshmigcdac(at)gmail(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com> |
| Subject: | Re: Add a greedy join search algorithm to handle large join problems |
| Date: | 2026-05-11 14:33:50 |
| Message-ID: | 4383D1E9-8F01-429E-9C18-1EFE12FF9196@outlook.com |
| Lists: | pgsql-hackers |
Hi,
> On May 5, 2026, at 13:40, John Naylor <johncnaylorls(at)gmail(dot)com> wrote:
>
> Thanks for those results. As Tomas mentioned above, evaluation should
> focus on cases where DP won't be used. Joins with a small number of
> relations aren't going to tell us anything, especially since (I think)
> GEQO will at times accidentally cover the entire search space. Indeed,
> there are quite a few queries where GOO is worse than GEQO by around
> 2-3x, but also small enough to be handled by DP anyway, so reporting
> them is a distraction.
>
> Less than half of the JOB-Complex queries have at least 12 "joins"
> (BTW, is that number in the pdf actual joins or is it base
> relations?), but the results (and summary results) include small joins
> as well. The queries where one of GOO/GEQO is much better than the
> other do seem to be large join problems.
Thanks for taking a look and pointing this out. You're right. Sorry
about that. Including the smaller join queries in the summary made the
result harder to interpret, because those are normally handled by
standard DP search and are not the main target of this patch.
For the comparison below, I only use queries involving at least 12 base
relations, i.e. the default geqo_threshold. The number shown in
the PDF is the number of base relations in the query's flat FROM list,
not the number of join operators. In JOB + JOB-Complex, 33 of the 143
queries meet that threshold: 20 from JOB and 13 from JOB-Complex.
I reran the v4 patch on those 33 queries, with DP / GEQO /
GOO(combined). This version does not include selectivity in the combined
candidate set; GOO(combined) only chooses between the GOO(cost) and
GOO(result_size) plans. The protocol and GUC settings are the same as
before: one warmup pass, three measured repetitions, median reported,
10-minute statement_timeout, and:
EXPLAIN (ANALYZE, TIMING OFF, SUMMARY ON, FORMAT JSON, SETTINGS ON)
The command checklist and run protocol are documented in the benchmark
repository [1][2]. I attached v4-job-benchmark-result-20260511.xlsx with
both planning and execution numbers, and
v4-job-benchmark-execution-result-20260511.pdf as a PDF view of the
execution-time sheet.
The detailed comparison below uses round1, which is the baseline
workbook attached here.
For planning time, GOO(combined) is still much cheaper than GEQO:
GEQO: 722 ms
GOO(combined): 108 ms
That is about 6.7x lower planning time for GOO(combined).
For execution time, compared with GEQO over the 33 comparable queries:
GOO(combined) faster by 20%-2x: 5 queries
GOO(combined) faster by 2x-10x: 4 queries
GOO(combined) faster by >10x: 4 queries
GOO(combined) within +/-20% of GEQO: 9 queries
GOO(combined) slower by 20%-2x: 5 queries
GOO(combined) slower by 2x-10x: 4 queries
GOO(combined) slower by >10x: 2 queries
timeouts: 0 for GEQO, 0 for GOO(combined)
On execution-time sum:
GEQO: 574582 ms
GOO(combined): 87706 ms
That is about 6.55x faster for GOO(combined) in this round, but the
distribution is not one-sided.
Some cases where GOO(combined) is much better than GEQO:
job_complex q19: 1125 ms vs 430902 ms, about 383x faster
job_complex q17: 975 ms vs 97105 ms, about 100x faster
job 26c: 880 ms vs 25859 ms, about 29x faster
job 29c: 31 ms vs 416 ms, about 13x faster
Some cases where GOO(combined) is much worse than GEQO:
job_complex q20: 1912 ms vs 78 ms, about 25x slower
job_complex q12: 71628 ms vs 3324 ms, about 22x slower
job 29a: 84 ms vs 9 ms, about 9.4x slower
job 30a: 1768 ms vs 423 ms, about 4.2x slower
So I would still describe the execution-time picture as mixed. Both
methods have bad cases.
>> GOO(combined) with cost and result_size may already be a useful baseline
>> or candidate generator, but the regressions above do not support making
>> it the default GEQO replacement. One possible advantage over GEQO is
>> that GOO may work better with pg_plan_advice, but I need to understand
>> the details better before making a stronger claim.
>
> Given that all heuristic join enumeration methods can produce
> spectacularly bad plans, the ability to influence the plan is more
> crucial with large join problems than with small ones. Features should be
> orthogonal in general, and in this case, integrating well with plan
> advice seems like a strong deciding factor.
I agree. For large joins, cardinality estimation, the cost model, and
the join-order search all interact. Even a better search method can
choose a bad plan if the cardinality estimates or cost model give it
misleading input. So the ability to tune or influence the chosen plan
becomes especially important.
After reading more of the pg_plan_advice discussion and code, my current
understanding is that GOO should be easier than GEQO to integrate with
join order advice. Please correct me if this understanding is wrong. My
reading of the pg_plan_advice discussion is that advice can only
reliably nudge the planner towards plans it would have considered
anyway; with GEQO, not all join orders are explored, so join-order
advice can only constrain the options that the genetic search actually
considers [3][4].
GOO seems to have a useful property here: candidate generation is
explicit. At each contraction step it evaluates legal pair joins, so an
advice implementation could reject or prefer candidate pairs at that
point. This does not make otherwise-infeasible advice possible, and it
would still need careful integration, but it is less dependent on
whether GEQO happened to generate a compatible join sequence.
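To make the "explicit candidate generation" point concrete, here is a
minimal Python sketch of a GOO-style contraction loop with a hypothetical
advice_allows hook. The function names and the product-of-cardinalities
estimate are purely illustrative, not the patch's actual code:

```python
from math import prod

def goo_join_order(base_cards, edges, advice_allows=lambda a, b: True):
    """Sketch of Greedy Operator Ordering: repeatedly contract the pair
    of relation sets whose join has the smallest estimated result size.

    base_cards   : dict base-relation name -> estimated cardinality
    edges        : set of frozenset pairs; a pair of sets is joinable
                   only if some edge connects the two sides
    advice_allows: hook where join-order advice could veto a candidate
    """
    def card(s):  # toy estimate: product of base cardinalities
        return prod(base_cards[r] for r in s)

    def joinable(a, b):
        return any(frozenset((x, y)) in edges for x in a for y in b)

    rels = [frozenset([r]) for r in sorted(base_cards)]
    steps = []
    while len(rels) > 1:
        # Candidate generation is explicit: every legal pair is visible
        # here, so advice can reject or prefer pairs before selection.
        candidates = [(card(a | b), i, j)
                      for i, a in enumerate(rels)
                      for j, b in enumerate(rels)
                      if i < j and joinable(a, b) and advice_allows(a, b)]
        if not candidates:
            raise ValueError("advice left no legal candidate pair")
        _, i, j = min(candidates)
        steps.append((rels[i], rels[j]))
        merged = rels[i] | rels[j]
        rels = [r for k, r in enumerate(rels) if k not in (i, j)] + [merged]
    return steps
```

With a hook like this, join-order advice would act at every contraction
step rather than on a finished join sequence, which is the property that
seems hard to get from GEQO.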
I also noticed a statistics-sensitivity issue while rerunning the
large-query subset. I ran five rounds over the same 33 queries and the
same DP / GEQO / GOO(combined) variants. Each round refreshed statistics
before executing the measured queries. I saved the statistics snapshot
with pg_dump --statistics-only and kept the per-round result
spreadsheets; these are in round_artifacts-20260511.zip, organized as
round1 through round5. The per-run measured repetitions are
comparatively stable; the large differences below correspond to changed
plans after refreshed statistics, not just measurement noise.
Execution-time sum:
| round | DP ms | GEQO ms | GOO(combined) ms |
| --- | ---: | ---: | ---: |
| round1 | 41091 | 574582 | 87706 |
| round2 | 41331 | 51205 | 85038 |
| round3 | 40493 | 95580 | 85561 |
| round4 | 41464 | 99228 | 83930 |
| round5 | 41611 | 126544 | 85578 |
GOO(combined) has the lower execution-time sum in four of the five
rounds. The exact bucket counts are not identical in every round, but
each round still has both large GOO(combined) wins and large
regressions. More importantly, GEQO varies much more across statistics
snapshots: its execution-time sum ranges from 51s to 575s, while
GOO(combined) stays in the 84s-88s range.
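For reference, the spread can be recomputed directly from the table
above (values copied from it, not new measurements):

```python
# Execution-time sums (ms) per round, copied from the five-round table.
geqo = [574582, 51205, 95580, 99228, 126544]
goo  = [87706, 85038, 85561, 83930, 85578]

print(f"GEQO range: {min(geqo)/1000:.0f}s to {max(geqo)/1000:.0f}s")  # 51s to 575s
print(f"GOO  range: {min(goo)/1000:.0f}s to {max(goo)/1000:.0f}s")    # 84s to 88s
```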
The large GEQO swings seem to follow this pattern: small changes in
sampled statistics change the estimated-cost ordering among close GEQO
join orders; some selected orders delay selective joins and produce much
larger intermediate results. For example, job_complex q19 ranges from
2.5s to 431s across the five rounds, job_complex q17 ranges from 0.96s
to 97s, job 26a ranges from 0.26s to 8.2s, and job 26c ranges from 0.65s
to 26s.
I do not think this means GOO(combined) is generally more stable. In
these runs, GEQO sometimes selected very different join orders after
refreshed statistics, while GOO(combined) changed less severely. One
possible reason is the different search shape: GOO(combined) compares a
small number of greedy completions, while GEQO searches among full join
sequences, where close estimated costs can lead to a very different
selected sequence. I would treat this as an observed behavior in this
benchmark rather than a general property of GOO.
My current summary is:
1. GOO(combined) still has much lower planning time than GEQO.
2. On execution time for large joins, both GEQO and GOO(combined) have
good and bad cases.
3. GOO may have a better path to work with plan advice, because the
search is deterministic and exposes explicit pair-join decision
points where advice could reject or prefer candidate pairs.
4. In this benchmark subset, GOO(combined) is more stable across
refreshed statistics, while GEQO is more sensitive to small
statistics changes.
As I mentioned in the previous message, the remaining GOO(combined)
regressions fall into two groups: sometimes neither cost nor result_size
finds a good greedy candidate, and sometimes a good greedy candidate
exists but the final estimated-cost selector chooses a slower plan.
So I am still looking at the two follow-up directions mentioned earlier,
both building on the current GOO work. One is to improve pure-GOO
candidate generation and the final selector, for example by adding more
diverse strategies and making the selector consider signals beyond final
estimated cost. The other is to use exact DP for a prefix of the problem
and fall back to GOO when the DP budget is not enough. The first
direction may need broader changes across planner or estimation-related
code, so I am currently focusing more on the second direction and will
share the code and numbers once I have a clearer result.
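As a toy illustration of the exact-DP half of that hybrid (the part that
would run while the budget lasts), here is a bitmask DP over connected
subsets with a deliberately simplified cost model: each join costs the
product of its two input cardinalities, and cross products are
disallowed. This is an assumption-laden sketch, not the patch's code; a
budgeted hybrid would hand the remaining relations to the greedy search
once the subset count exceeds the DP budget.

```python
from math import prod

def dp_best_cost(base_cards, edges):
    """Exhaustive bitmask DP over connected subsets (no cross products).
    Toy cost model: a join costs the product of its two input
    cardinalities; a set's cardinality is the product of its base
    cardinalities. Returns the cheapest total cost for the full join."""
    rels = sorted(base_cards)
    n = len(rels)

    def names(mask):
        return [rels[i] for i in range(n) if mask >> i & 1]

    def connected(m1, m2):
        return any(frozenset((x, y)) in edges
                   for x in names(m1) for y in names(m2))

    card = {m: prod(base_cards[r] for r in names(m))
            for m in range(1, 1 << n)}
    best = {1 << i: 0 for i in range(n)}   # single relations are free
    for mask in range(1, 1 << n):
        if mask in best:                   # skip seeded singletons
            continue
        sub = (mask - 1) & mask            # enumerate proper submasks
        while sub:
            rest = mask ^ sub
            if sub in best and rest in best and connected(sub, rest):
                cost = best[sub] + best[rest] + card[sub] * card[rest]
                if cost < best.get(mask, float("inf")):
                    best[mask] = cost
            sub = (sub - 1) & mask
    return best.get((1 << n) - 1)
```

On a chain a-b-c with cardinalities 100/10/2, this prefers joining b and
c first, which is the kind of selective-join-first ordering the greedy
completions sometimes miss.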
References:
[1] Benchmark reproduction notes:
https://github.com/Reminiscent/join-order-benchmark/blob/main/REPRODUCE.md
[2] Benchmark run protocol:
https://github.com/Reminiscent/join-order-benchmark/blob/main/BENCHMARK_RUNS.md
[3] pg_plan_advice / GEQO discussion:
https://www.postgresql.org/message-id/CA%2BTgmoYD55R8XqgbFbKhC4Uipu-gxA1msDOsUAZ9q38iCN8dQA%40mail.gmail.com
[4] Same pg_plan_advice thread, GEQO note:
https://www.postgresql.org/message-id/CA%2BTgmoZpQDJOz_W34Wkp-JA%3DMQpzLeV6dsDGt%3D04U0AD6c65RA%40mail.gmail.com
--
Best regards,
Chengpeng Yan
| Attachment | Content-Type | Size |
|---|---|---|
| v4-job-benchmark-result-20260511.pdf | application/pdf | 23.2 KB |
| v4-job-benchmark-result-20260511.xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | 11.8 KB |
| round_artifacts-20260511.zip | application/zip | 438.9 KB |