>Since you have two redundant tests, the selectivity is being
>double-counted, leading to a too-small rows estimate and a not very
>appropriate choice of join plan.
I see; thanks for the explanation. I did notice, though, that in the second
case, with one redundant test removed, the estimate is still low:
"Hash Left Join (cost=1449.99..2392.68 rows=2 width=714) (actual
time=24.257..25.292 rows=553 loops=1)"
In that case the prediction is 2 rows, only one row more than in the
previous case, yet the plan is much better and performance improved
dramatically. Is there an explanation for that?
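For what it's worth, the double-counting effect Tom describes can be sketched numerically. This is an illustration, not PostgreSQL's actual code: when the planner has no better information, it treats ANDed clauses as independent and multiplies their selectivities, so a duplicated test shrinks the row estimate quadratically. The table size and per-clause selectivity below are made-up numbers.

```python
# Illustrative sketch (not PostgreSQL source): independent ANDed clauses
# have their selectivities multiplied, so a duplicated clause is counted
# twice and the row estimate collapses.

def estimated_rows(total_rows, selectivities):
    """Multiply per-clause selectivities, as for independent AND clauses."""
    est = total_rows
    for sel in selectivities:
        est *= sel
    return max(1, round(est))

total = 100_000   # hypothetical table size
sel = 0.004       # hypothetical selectivity of one test

print(estimated_rows(total, [sel]))       # test applied once -> 400
print(estimated_rows(total, [sel, sel]))  # same test duplicated -> 2
```

With the test counted twice, the estimate drops to 2 rows even though the single-test estimate is 400, which is the kind of underestimate that pushes the planner toward an inappropriate join plan.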
>FWIW, CVS HEAD does get rid of the duplicate conditions for the common
>case of mergejoinable equality operators --- but it's not explicitly
>looking for duplicate conditions, rather this is falling out of a new
>method for making transitive equality deductions.
This sounds very interesting, Tom. Is there documentation somewhere
where I can read about this new method?
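In the meantime, the general idea behind transitive equality deduction can be sketched with a union-find structure: equality conditions are merged into equivalence classes, and a condition whose two sides already land in the same class is redundant. This is only a conceptual sketch with invented names, not how the PostgreSQL planner is actually implemented.

```python
# Conceptual sketch of equivalence-class merging for equality conditions
# (illustrative only; the real planner logic is more involved).

class EquivalenceClasses:
    """Union-find over column names joined by equality conditions."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_equality(self, a, b):
        """Record condition a = b; return False if it was already implied."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # redundant: deducible transitively
        self.parent[ra] = rb
        return True

ec = EquivalenceClasses()
print(ec.add_equality("t1.id", "t2.id"))  # True: new information
print(ec.add_equality("t2.id", "t3.id"))  # True: new information
print(ec.add_equality("t1.id", "t3.id"))  # False: implied by the first two
```

Once `t1.id = t2.id` and `t2.id = t3.id` are in one class, `t1.id = t3.id` adds nothing, so a duplicate condition can be dropped rather than having its selectivity counted again.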