From: Noah Misch <noah(at)leadboat(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: *_collapse_limit, geqo_threshold
Date: 2009-07-08 13:43:12
Message-ID: 20090708134312.GA25604@tornado.leadboat.com
Lists: pgsql-hackers
On Tue, Jul 07, 2009 at 09:31:14AM -0500, Kevin Grittner wrote:
> I don't remember any clear resolution to the wild variations in plan
> time mentioned here:
>
> http://archives.postgresql.org/pgsql-hackers/2009-06/msg00743.php
>
> I think it would be prudent to try to figure out why small changes in
> the query caused the large changes in the plan times Andres was
> seeing. Has anyone else ever seen such behavior? Can we get
> examples? (It should be enough to get the statistics and the schema,
> since this is about planning time, not run time.)
With joins between statistically indistinguishable columns, I see planning times
change by a factor of ~4 for each join added or removed (postgres 8.3). Varying
join_collapse_limit in the neighborhood of the actual number of joins has a
similar effect. See attachment with annotated timings. The example uses a
single table joined to itself, but using distinct tables with identical contents
yields the same figures.
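The attached planner-timings.sql is not reproduced here, but the shape of the test can be sketched roughly as follows; the table and column names are invented for illustration, and EXPLAIN (without ANALYZE) is used so that only planning work is measured (psql's \timing reports the elapsed time):

```sql
-- Hypothetical sketch of a self-join planning-time test
-- (not the attached script; names are illustrative).
CREATE TABLE t (k integer);
INSERT INTO t SELECT generate_series(1, 10000);
ANALYZE t;

-- Search the full join-order space rather than flattening early.
SET join_collapse_limit = 100;

-- Add or remove one join per run and compare \timing results;
-- EXPLAIN plans the query without executing it.
EXPLAIN SELECT 1
FROM t a
JOIN t b ON a.k = b.k
JOIN t c ON a.k = c.k
JOIN t d ON a.k = d.k;
```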
The exponential factor seems smaller for real queries. I have a query with
sixteen joins that takes 71s to plan deterministically; it looks like this:
SELECT 1 FROM fact JOIN dim0 ... JOIN dim6
    JOIN t t0 ON fact.key = t0.key AND t0.x = MCV0
    LEFT JOIN t t1 ON fact.key = t1.key AND t1.x = MCV1
    JOIN t t2 ON fact.key = t2.key AND t2.x = MCV2
    LEFT JOIN t t3 ON fact.key = t3.key AND t3.x = NON-MCV0
    LEFT JOIN t t4 ON fact.key = t4.key AND t4.x = NON-MCV1
    LEFT JOIN t t5 ON fact.key = t5.key AND t5.x = NON-MCV2
    LEFT JOIN t t6 ON fact.key = t6.key AND t6.x = NON-MCV3
    LEFT JOIN t t7 ON fact.key = t7.key AND t7.x = NON-MCV4
For the real query, removing one join drops plan time to 26s, and removing two
drops the time to 11s. I don't have a good theory for the multiplier changing
from 4 for the trivial demonstration to ~2.5 for this real query. Re-enabling
geqo drops plan time to 0.5s. These tests used default_statistics_target = 1000,
but dropping that to 100 does not change anything dramatically.
> I guess the question is whether there is anyone who has had a contrary
> experience. (There must have been some benchmarks to justify adding
> geqo at some point?)
I have queries with a few more joins (19-21), and I cancelled attempts to plan
them deterministically after 600+ seconds and 10+ GiB of memory usage. Even
with geqo_effort = 10, they plan within 5-15s with good results.
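For reference, the genetic-optimizer behavior above is controlled by a handful of GUCs; a sketch of the settings involved, using only values mentioned in this message (geqo_threshold's value here is just the stock default):

```sql
-- Let the genetic optimizer take over instead of exhaustive search.
SET geqo = on;
SET geqo_threshold = 12;  -- use geqo at or above this many FROM items
SET geqo_effort = 10;     -- maximum effort (range 1-10, default 5)
```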
All that being said, I've never encountered a situation where a value other than
1 or <inf> for *_collapse_limit appeared optimal.
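The two extremes can be expressed as settings like these; there is no literal "infinity" value, so a value safely above the query's join count stands in for it (100 here is an arbitrary illustrative choice):

```sql
-- Flatten nothing: plan joins in the order the query writes them.
SET join_collapse_limit = 1;

-- Effectively unlimited: always search the full join-order space.
SET join_collapse_limit = 100;
SET from_collapse_limit = 100;
```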
nm
Attachment: planner-timings.sql (text/plain, 2.5 KB)