"Hell, Robert" <Robert(dot)Hell(at)fabasoft(dot)com> writes:
> bad plan (sometimes with statistics target 100, seconds after the good plan was chosen) - about 2 minutes: http://explain.depesz.com/s/gcr
> good plan (most of the time with statistics target 100) - about one second: http://explain.depesz.com/s/HX
> very good plan (with statistics target 10) - about 15 ms: http://explain.depesz.com/s/qMc
> What's the reason for that?
Garbage in, garbage out :-(. When you've got rowcount estimates that
are off by a couple orders of magnitude, it's unsurprising that you get
bad plan choices. In this case it appears that the "bad" and "good"
plans have just about the same estimated cost. I'm guessing that the
underlying statistics change a bit due to autovacuum activity, causing
the plan choice to flip unexpectedly.
The real fix would be to get the rowcount estimates more in line with
reality. I think the main problem is that in cases like
-> Index Scan using ind_atobjval on atobjval t6 (cost=0.00..12.04 rows=1 width=12) (actual time=0.032..0.953 rows=775 loops=1)
Index Cond: ((attrid = 285774255985991::bigint) AND (objval = 285774255985589::bigint))
the planner is supposing that the two conditions are independent when
they are not. Is there any way you can refactor the data representation
to remove the hidden redundancy?
regards, tom lane
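The independence assumption described above can be sketched numerically. This is a minimal illustration, not the planner's actual code: the table size and selectivities below are hypothetical, chosen only to show how multiplying per-condition selectivities collapses the estimate when the two conditions are in fact redundant (as with the rows=1 vs. actual rows=775 mismatch in the plan).

```python
# Hypothetical numbers illustrating the planner's independence assumption
# for ANDed index conditions (attrid = ... AND objval = ...).
rows_total = 1_000_000           # assumed table size
sel_attrid = 775 / rows_total    # fraction of rows matching the attrid condition
sel_objval = 775 / rows_total    # fraction matching the objval condition

# Planner-style estimate: the two conditions are treated as independent,
# so their selectivities multiply.
est_rows = rows_total * sel_attrid * sel_objval
print(est_rows)                  # well under 1, shown as rows=1 in a plan

# If the conditions are redundant (every row matching one matches the other),
# the true rowcount equals a single condition's match count.
actual_rows = 775
print(actual_rows / est_rows)    # underestimate factor: ~three orders of magnitude
```

A misestimate of this size is enough to make two very different plans look nearly equal in cost, which is why small statistics changes from autovacuum can flip the choice.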