From: "Jorge Montero" <jorge_montero(at)homedecorators(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Re: 512,600ms query becomes 7500ms... but why? Postgres 8.3 query planner quirk?
Date: 2010-02-10 23:18:36
Message-ID: 4B72EA6C.2E1C.0042.0@homedecorators.com
Lists: pgsql-performance
That sure looks like the source of the problem to me too; I've seen similar behavior in queries not very different from that one. It's hard to guess exactly what the problem is without knowing more about the data distribution in article_words, though.
Given the ANALYZE results, I'd run the deepest subquery on its own and see whether I could get its row estimate to match reality, either by raising the statistics targets on the relevant columns or by rewriting the query to give the planner more information.
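If the statistics turn out to be the weak point, raising the per-column target and re-analyzing is cheap to try. A minimal sketch, assuming the relevant columns are word on words and word_key on article_words (my guess from the queries below), with 500 as an arbitrary target to experiment with (8.3 caps it at 1000):

-- Collect a larger sample for the columns the planner has to estimate on.
ALTER TABLE words ALTER COLUMN word SET STATISTICS 500;
ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 500;
-- Rebuild the statistics so the new targets take effect.
ANALYZE words;
ANALYZE article_words;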
For example, I'd check whether the row estimate for
SELECT context_key FROM article_words JOIN words USING (word_key) WHERE word = 'insider'
is much less accurate than the estimate for
SELECT context_key FROM article_words WHERE word_key = (whatever the actual word_key for 'insider' is)
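A quick way to see both numbers side by side is EXPLAIN ANALYZE on each query. Just a sketch; 12345 is a hypothetical word_key, so substitute whatever SELECT word_key FROM words WHERE word = 'insider' actually returns:

EXPLAIN ANALYZE
SELECT context_key FROM article_words JOIN words USING (word_key) WHERE word = 'insider';

EXPLAIN ANALYZE
SELECT context_key FROM article_words WHERE word_key = 12345;  -- hypothetical word_key for 'insider'

In each node of the plan, compare rows= in the (cost=...) part against rows= in the (actual time=...) part. If the joined version is the one that's off by a couple of orders of magnitude, the cross-table estimate is where I'd focus.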
>>> Robert Haas <robertmhaas(at)gmail(dot)com> 02/10/10 2:31 PM >>>
On Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt <bryce2(at)obviously(dot)com> wrote:
> Or, if you want to actually read that query plan, try:
> http://explain.depesz.com/s/qYq
Much better, though I prefer a text attachment... anyhow, I think the root of the problem may be that both of the subquery scans under the Append node are seeing hundreds of times more rows than they're expecting, which is causing the planner to choose nested loops for the joins higher up that it might otherwise have implemented in some other way. I'm not quite sure why, though.
...Robert