On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau wrote:
> Reading the documentation and postgresql list archives, I have
> run ANALYZE right before my tests, I have increased the
> statistics target to 50 for the considered table; my problem is
> that the index scan cost reported by EXPLAIN seems to be around
> 12.7 times higher that it should, a figure I suppose incompatible
> (too large) for just random_page_cost and effective_cache_size
It's not surprising that you get a high cost for an index scan which is
planned to return, and does return, that many rows. I really don't think
the planner is doing anything wrong on this one.
AFAIK, increasing the statistics target won't do anything to reduce
the cost, as the planner's estimate of the number of returned rows is
already very accurate and probably can't get much better.
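For reference, raising the target and re-analyzing, then checking the estimate against reality, looks roughly like this (the table and column names are hypothetical, not from the original post):

```sql
-- Hypothetical schema; substitute your own table and date column.
ALTER TABLE events ALTER COLUMN event_date SET STATISTICS 50;
ANALYZE events;

-- Compare the planner's estimated row count ("rows=") with the
-- actual count reported by EXPLAIN ANALYZE:
EXPLAIN ANALYZE
SELECT * FROM events
WHERE event_date BETWEEN '2006-01-01' AND '2006-03-01';
```

If the estimated and actual row counts already match closely, a higher statistics target won't change the plan or its cost.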
> Of course real queries use smaller date ranges.
What about providing us the respective plans for your real queries?
And do it on a realistic case: it's a bad idea to compare an index scan
and a seq scan when your data have to be loaded into RAM.
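One way to get both plans for the same query on a warm cache is to run it twice and temporarily disable seq scans in your session (the query itself is hypothetical here):

```sql
-- Hypothetical query; substitute your real one. Run it twice so the
-- second run measures a warm cache.
EXPLAIN ANALYZE SELECT * FROM events WHERE event_date >= '2006-03-01';

-- Steer the planner away from the seq scan, for this session only:
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM events WHERE event_date >= '2006-03-01';
SET enable_seqscan = on;
```

Comparing the two "actual time" figures tells you which plan really wins, independently of the cost estimates.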
Before doing so, create an index on the date column so you have the
most effective index possible.
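Something along these lines (again with hypothetical names), followed by a fresh ANALYZE so the planner sees the new index:

```sql
-- A plain btree index on the date column; names are hypothetical.
CREATE INDEX events_event_date_idx ON events (event_date);
ANALYZE events;
```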
> - I then tried to tweak random_page_cost and effective_cache_size
> following advices from documentation:
> SET random_page_cost = 2;
random_page_cost is the way to go for this sort of thing, but I don't
think it's a good idea to set it too low globally, and I'm still
thinking the problem is that your test case is not representative.
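Rather than lowering it globally in postgresql.conf, you can try it per session, or per transaction with SET LOCAL so it reverts automatically (query shown is hypothetical):

```sql
-- Session-local: affects only this connection, until it disconnects.
SET random_page_cost = 2;

-- Transaction-local: reverts at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM events WHERE event_date >= '2006-03-01';
COMMIT;
```

That way you can see whether the plan flips at a given value without changing behavior for every other query on the server.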