From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Henrik <henke(at)mac(dot)se>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Query taking too long. Problem reading explain output.
Date: 2007-10-04 23:43:31
Message-ID: 20071004234331.GG28896@alvh.no-ip.org
Lists: pgsql-performance
Henrik wrote:
> Correct. I changed the statistics target to 500 on tbl_file.file_name
> and now the statistics are better. But now my big seq scan on
> tbl_file_structure is back and I don't know why.
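[For reference, the per-column statistics change described in the quote above can be expressed as follows. This is a sketch; the table and column names (tbl_file, file_name) come from the thread, and the ANALYZE step is needed for the new target to take effect.]

```sql
-- Raise the per-column statistics target, then re-ANALYZE so the
-- planner sees the more detailed histogram for file_name.
ALTER TABLE tbl_file ALTER COLUMN file_name SET STATISTICS 500;
ANALYZE tbl_file;
```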
Hmm, I think the problem here is that it needs to fetch ~200000 tuples
from tbl_file_structure one way or the other. When it misestimated the
number of tuples from tbl_file, it thought it would only need to do the
indexscan on tbl_file_structure a few times; now it realizes it needs to
do it several thousand times, so it considers the seqscan cheaper.
Perhaps you would benefit from a higher effective_cache_size or a lower
random_page_cost (or both).
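[A quick way to try those two settings is at the session level, so nothing is changed server-wide. The values below are illustrative assumptions, not recommendations; re-run the problem query under EXPLAIN ANALYZE to see whether the plan switches back to the indexscan.]

```sql
-- Session-level experiment; values here are only examples.
SET effective_cache_size = '1GB';  -- rough size of OS cache + shared_buffers
SET random_page_cost = 2.0;        -- lower than the default of 4.0
EXPLAIN ANALYZE
SELECT ...;                        -- the original query from the thread
```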
I think this is a problem in the optimizer: it doesn't correctly take
into account the fact that the upper pages of the index are most likely
to be cached. This has been discussed many times, but it's not a simple
problem to fix.
--
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
This mail is delivered with a guarantee of being 100% sarcasm-free.