From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Ruf <mrf(at)inxmail(dot)de>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Optimizer use of index slows down query by factor
Date: 2009-12-24 15:46:25
Message-ID: 381.1261669585@sss.pgh.pa.us
Lists: pgsql-performance
Michael Ruf <mrf(at)inxmail(dot)de> writes:
> we're experiencing some strange performance problems; we've already found a
> workaround, but are curious whether it's a known problem of the optimizer.
I think you need to see about getting this rowcount estimate to be more
accurate:
> -> Index Scan using idx_link_1 on link
> (cost=0.00..680.51 rows=13477 width=26) (actual time=5.707..12.043
> rows=126 loops=1)
> Index Cond: (task_id = 1556)
> Filter: (((deletable IS NULL) OR (NOT
> deletable)) AND ((link_type = 8) OR (link_type = 9)))
If it realized there'd be only 126 rows out of that scan, it'd probably
have gone for a nestloop join against the big table, which I think would
be noticeably faster than either of the plans you show here.
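One generic way to chase a misestimate like this (a sketch, not something from this thread) is to raise the per-column statistics targets on just the columns in the problematic filter, rather than raising default_statistics_target globally. Table and column names below are taken from the quoted plan; adjust them to the real schema:

```sql
-- Raise the sample size for the columns involved in the misestimated
-- filter (names taken from the quoted plan; adjust as needed).
ALTER TABLE link ALTER COLUMN link_type SET STATISTICS 1000;
ALTER TABLE link ALTER COLUMN deletable SET STATISTICS 1000;

-- Re-gather statistics so the planner actually uses the new targets.
ANALYZE link;
```

As Tom notes above, though, if the misestimate comes from non-independent conditions rather than undersampling, more statistics alone won't fix it.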
You already did crank up default_statistics_target, so I'm not sure if
raising it further would help any. What I'd suggest is trying to avoid
using non-independent AND/OR conditions. For instance recasting the
first OR as just "deletable is not true" would probably result in a
better estimate. The size of the error seems to be more than that would
account for, though, so I suspect that the deletable and link_type
conditions are interdependent. Is it practical to recast your data
representation to avoid that?
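The recast Tom suggests might look like this (a sketch; only the filter columns come from the quoted plan, the rest of the query is illustrative). `deletable IS NOT TRUE` matches both NULL and false in one condition, so the planner estimates it as a single clause instead of two OR arms whose selectivities it multiplies as if independent:

```sql
-- Before: two OR arms over "deletable", estimated independently:
--   WHERE ((deletable IS NULL) OR (NOT deletable))
--     AND ((link_type = 8) OR (link_type = 9))

-- After: one condition per column, easier to estimate.
SELECT *
FROM link
WHERE task_id = 1556
  AND deletable IS NOT TRUE
  AND link_type IN (8, 9);
```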
regards, tom lane