On 5/25/06, kynn(at)panix(dot)com <kynn(at)panix(dot)com> wrote:
> Well, they're not my statistics; they're explain's. You mean there's
> a bug in explain? I agree that it makes no sense that the costs don't
> differ as much as one would expect, but you can see right there the
> numbers of rows for the two tables. At any rate, how would one go
> about finding an explanation for these strange stats?
Well, the query planner uses statistics to deduce the best plan
possible. Explain includes this statistical data in its output.
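For example, a sketch of where those estimates show up (table, column, and numbers here are hypothetical, just to illustrate reading the output):

```sql
EXPLAIN ANALYZE SELECT * FROM huge_table WHERE some_col = 42;
--  Seq Scan on huge_table  (cost=0.00..12345.00 rows=1 width=8)
--                          (actual time=...   rows=5000 loops=1)
-- The estimated rows= value comes from the planner's statistics.
-- When it diverges wildly from the actual rows= value, the stats
-- are stale or not detailed enough, and the plan choice suffers.
```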
The idea is that your statistics are probably not detailed enough
to help the planner. See ALTER TABLE ... SET STATISTICS to change
that, and see
...for information about what it is all about.
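Something along these lines (column name and target value are made up for illustration; the default per-column target was 10 at the time, and larger values make ANALYZE sample more):

```sql
-- Raise the per-column statistics target, then re-gather statistics:
ALTER TABLE huge_table ALTER COLUMN some_col SET STATISTICS 100;
ANALYZE huge_table;
```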
> More bewildering still (and infuriating as hell--because it means that
> all of my work for yesterday has been wasted) is that I can no longer
> reproduce the best query plan I posted earlier, even though the tables
> have not changed at all. (Hence I can't post the explain analyze for
> the best query plan, which Josh Drake asked for.) No matter what
> value I use for LIMIT, the query planner now insists on sequentially
> scanning huge_table and ignoring the available index. (If I turn off
> enable_seqscan, I get the second worst query plan I posted yesterday.)
> Anyway, I take it that there is no way to bypass the optimizer and
> instruct PostgreSQL exactly how one wants the search performed?
There is no way to bypass the optimizer, but there are many ways to tune it.
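A few of the usual knobs, as a sketch (the values are illustrative, not recommendations for your setup):

```sql
-- Per-session switch, useful for diagnosis only: see what plan
-- you get when sequential scans are strongly discouraged.
SET enable_seqscan = off;

-- Cost-model tuning, e.g. when most of the data is cached and
-- random I/O is cheaper than the default assumes:
SET random_page_cost = 2.0;

-- And make sure the statistics themselves are fresh:
ANALYZE huge_table;
```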
Hmm, there is a chance (a guess, though the statistics make it look
likely) that you're using an older version of PostgreSQL and
hitting the same problem I did:
Tom provided a patch back then which fixed it:
...but I don't remember when it made it into a release.