From: "henk de wit" <henk53602(at)hotmail(dot)com>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Redundant sub query triggers slow nested loop left join
Date: 2007-04-22 16:17:40
Message-ID: BAY106-F7E7209A4ED3C87F1CAFC1F5540@phx.gbl
Lists: pgsql-performance
>Since you have two redundant tests, the selectivity is being
>double-counted, leading to a too-small rows estimate and a not very
>appropriate choice of join plan.
I see, thanks for the explanation. I did notice, though, that in the second
case, with one redundant test removed, the estimate is still low:
Hash Left Join (cost=1449.99..2392.68 rows=2 width=714) (actual time=24.257..25.292 rows=553 loops=1)
In that case the prediction is 2 rows, only 1 row more than in the previous
case, yet the plan is much better and performance improved dramatically. Is
there an explanation for that?
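For readers following the thread, here is a minimal sketch of the kind of redundant join condition under discussion; the table and column names are hypothetical, not from the original query:

```sql
-- Hypothetical tables; the duplicated condition below is the kind of
-- redundant test whose selectivity the planner counts twice.
SELECT *
FROM orders o
LEFT JOIN customers c
       ON c.id = o.customer_id
      AND c.id = o.customer_id;  -- redundant copy of the same condition
-- Each copy of the condition multiplies the estimated selectivity, so the
-- row estimate comes out too small and the planner may choose a poor join
-- strategy (e.g. a nested loop where a hash join would be far cheaper).
```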
>FWIW, CVS HEAD does get rid of the duplicate conditions for the common
>case of mergejoinable equality operators --- but it's not explicitly
>looking for duplicate conditions, rather this is falling out of a new
>method for making transitive equality deductions.
This sounds very interesting, Tom. Is there some documentation somewhere
where I can read about this new method?
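For context, the transitive deduction Tom describes (the equivalence-class machinery in what later shipped as PostgreSQL 8.3) works roughly as sketched below; the tables here are again hypothetical:

```sql
-- Given mergejoinable equality conditions:
--   a.x = b.x AND b.x = c.x
-- the planner groups {a.x, b.x, c.x} into a single equivalence class and
-- can deduce a.x = c.x on its own.  An explicitly written duplicate, e.g.
--   ... ON a.x = b.x AND b.x = c.x AND a.x = c.x
-- is therefore recognized rather than treated as an independent condition,
-- so its selectivity is no longer double-counted.
SELECT *
FROM a
JOIN b ON a.x = b.x
JOIN c ON b.x = c.x;
```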