Peter Hussey <peter(at)labkey(dot)com> writes:
> Using the default of 1MB work_mem, the planner chooses a hash join plan:
> "Hash Left Join (cost=252641.82..11847353.87 rows=971572 width=111) (actual
> time=124196.670..280461.604 rows=968080 loops=1)"
> For the same default 1MB work_mem, a nested loop plan is better
> "Nested Loop Left Join (cost=8.27..15275401.19 rows=971572 width=111)
> (actual time=145.015..189957.023 rows=968080 loops=1)"
Hm. A nestloop with nearly a million rows on the outside is pretty
scary. The fact that you aren't unhappy with that version of the plan,
rather than the hash, indicates that the "object" table must be
fully cached in memory, otherwise the repeated indexscans would be a
lot slower than this:
> " -> Index Scan using uq_object on object obj (cost=0.00..3.51 rows=1
> width=95) (actual time=0.168..0.170 rows=1 loops=968080)"
> " Index Cond: ((sd.lsid)::text = (obj.objecturi)::text)"
My take on it is that the estimate of the hash plan's cost isn't bad;
what's bad is that the planner is mistakenly estimating the nestloop as
being worse. What you need to do is adjust the planner's cost
parameters so that it has a better idea of the true cost of repeated
index probes in your environment. Crank up effective_cache_size if
you didn't already, and experiment with lowering random_page_cost.
See the list archives for more discussion of these parameters.
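A minimal sketch of that experiment at the session level (the values here are illustrative placeholders, not recommendations; the right numbers depend on your machine's RAM and storage):

```sql
-- effective_cache_size should reflect the total memory available for caching
-- (shared_buffers plus the OS filesystem cache); it only informs the planner,
-- it does not allocate anything.
SET effective_cache_size = '6GB';   -- example value

-- random_page_cost defaults to 4.0; when the working set is mostly cached,
-- lowering it toward seq_page_cost (1.0) makes repeated index probes look
-- cheaper to the planner, as they actually are in that situation.
SET random_page_cost = 1.5;         -- example value

-- Re-run the problem query under the new settings and compare plans:
EXPLAIN ANALYZE
SELECT ... ;                        -- the original left-join query
```

Using SET rather than editing postgresql.conf keeps the change local to the session while you compare EXPLAIN ANALYZE output under different settings.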
regards, tom lane
pgsql-performance mailing list