> Well, I don't have time to do a thorough analysis right now, but in all
> the plans you've posted there are quite high values in the "Rows x" column
> (e.g. the 5727.5 value).
> That means a significant difference between estimated and actual row counts,
> which may lead to a poor choice of indexes, etc. The planner may simply
> think the index is better because of imprecise statistics.
Yeah. The sq_ast_attr_val_attrid scan is a lot more selective than the
planner is guessing (3378 rows estimated vs an average of 15 actual),
and I think that is making the difference. If you look at the estimated
row counts and costs, it's expecting that adding the second index will
cut the number of heap fetches about 7x, hence saving somewhere around
4800 cost units in the heapscan step, more than it thinks the indexscan
will cost. But taking 15 row fetches down to 2 isn't nearly enough to
pay for the extra indexscan.
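For anyone following along, that estimated-vs-actual gap is visible directly in
EXPLAIN ANALYZE output. The sketch below is illustrative only: the row counts
are the ones discussed above, and everything else (the query, costs, timings)
is elided rather than taken from a real plan in this thread:

```sql
-- Illustrative shape of EXPLAIN ANALYZE output; only the "rows" figures
-- come from the discussion above, all other fields are elided.
EXPLAIN ANALYZE SELECT ...;
--  Index Scan using sq_ast_attr_val_attrid on ...
--    (cost=... rows=3378 width=...) (actual time=... rows=15 loops=...)
```

The first "rows" is the planner's estimate; the second is what actually came
back, and a large ratio between them is the red flag.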
> Try to increase the statistics target for the columns, e.g.
> ALTER TABLE table ALTER COLUMN column SET STATISTICS integer
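In concrete terms that would look something like the following. The table and
column names here are only guesses inferred from the index name
sq_ast_attr_val_attrid mentioned above, so treat them as placeholders:

```sql
-- Placeholder names inferred from the index sq_ast_attr_val_attrid;
-- substitute the real table and column. The default per-column target is 100.
ALTER TABLE sq_ast_attr_val ALTER COLUMN attrid SET STATISTICS 1000;
ANALYZE sq_ast_attr_val;  -- the new target has no effect until re-ANALYZE
```

Note the ANALYZE step: SET STATISTICS only changes the target, it doesn't
regather the statistics by itself.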
It's worth a try but I'm not sure how much it'll help. A different line
of attack is to play with the planner cost parameters. In particular,
reducing random_page_cost would reduce the estimated cost of the heap
fetches and thus discourage it from using the extra index. If you're
working with mostly-cached tables then this would probably improve
behavior overall, too.
regards, tom lane