Jared Beck <jared(at)singlebrook(dot)com> writes:
> Hello postgres wizards,
> We recently upgraded from 8.1.5 to 8.4
> We have a query (slow_query.sql) which took about 9s on 8.1.5
> On 8.4, the same query takes 17.7 minutes.
One thing that is hobbling the performance on 8.4 is that you have
work_mem set to only 1MB (you had it considerably higher on 8.1).
This is causing that sort step to spill to disk, which isn't helping
its rescan performance one bit.
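For reference, work_mem can be raised for just the current session (or a single role) rather than globally; the 64MB figure and the role name below are only illustrative, not a recommendation for your workload:

```sql
-- Raise work_mem for the current session only; pick a value based on
-- how much memory each concurrent sort/hash node can safely use.
SET work_mem = '64MB';

-- Or persist it for one role (role name here is hypothetical):
ALTER ROLE app_user SET work_mem = '64MB';
```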
Other things you might try include increasing join_collapse_limit
to 12 or so, and reducing random_page_cost. The fact that the 8.1
plan didn't completely suck indicates that your database must be
mostly in cache, so the default random_page_cost is probably too high
to model its behavior well.
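A sketch of those two settings, with illustrative values only (the right random_page_cost depends on how much of the database really is cached):

```sql
-- Let the planner consider larger join search spaces (default is 8):
SET join_collapse_limit = 12;

-- For a mostly-cached database, bring random_page_cost down toward
-- seq_page_cost; the default of 4.0 models uncached rotating disks:
SET random_page_cost = 2.0;
```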
Another thing to look into is whether you can't get it to make a
better estimate for this:
-> Index Scan using index_tbldata_variableid on tbldata dv118488y0 (cost=0.00..5914.49 rows=8 width=22) (actual time=1.555..209.856 rows=16193 loops=1)
Index Cond: (variableid = 118488)
Filter: (castbooltoint((((value)::text ~ '^-?[0-9]*([0-9]+\.|\.[0-9]+)?[0-9]*([Ee][-+]\d*)?$'::text) AND ((value)::text <> '-'::text))) = 1)
Being off by a factor of 2000 on a first-level rowcount estimate is
almost inevitably a ticket to a bad join plan. I doubt that the
condition on variableid is being that badly estimated; the problem is
the filter condition. Whatever possessed you to take a perfectly good
boolean condition and wrap it in "castbooltoint(condition) = 1"?
I'm not sure how good the estimate would be anyway for the LIKE
condition, but that bit of obscurantism isn't helping.
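A sketch of the suggested rewrite, assuming the surrounding query can take the boolean predicate directly (the regex is reproduced from the plan above; the table alias is the one shown there):

```sql
-- Instead of wrapping the condition as
--   castbooltoint(... ) = 1
-- let the planner see the plain boolean filter:
WHERE (dv118488y0.value)::text ~ '^-?[0-9]*([0-9]+\.|\.[0-9]+)?[0-9]*([Ee][-+]\d*)?$'
  AND (dv118488y0.value)::text <> '-'
```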
regards, tom lane