From: Michael Brusser <michael(at)synchronicity(dot)com>
To: Pgsql-Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Slow query - index not used
Date: 2004-01-20 21:45:49
Message-ID: DEEIJKLFNJGBEMBLBAHCOEGKECAA.michael@synchronicity.com
Lists: pgsql-hackers
I'm running Postgres v.7.3.4 on Solaris.
Here's a bare-bones version of the query that runs unreasonably slowly:
SELECT count (*) from note_links_aux WHERE nt_note_name='KnowledgeBase';
The query plan shows that the database scans the entire table:
Aggregate (cost=1983.58..1983.58 rows=1 width=0)
-> Seq Scan on note_links_aux (cost=0.00..1970.18 rows=5360 width=0)
Filter: (nt_note_name = 'KnowledgeBase'::character varying)
even though field nt_note_name is indexed:
Indexes: nla_nl_id_fk_i btree (nl_id),
nla_nt_note_name_fk_i btree (nt_note_name), ... ...
Running the same query against another varchar field on this table,
I see that the index is used:
-> Index Scan using ...
That query runs much faster.
Analyzing the table did not help.
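For reference, here is the kind of diagnostic I can run to check whether the planner is even willing to use the index (a sketch; `enable_seqscan` is a standard PostgreSQL run-time setting):

```sql
-- Sketch: temporarily discourage sequential scans for this session,
-- then re-run EXPLAIN to see if the planner switches to the index.
SET enable_seqscan = off;
EXPLAIN SELECT count(*) FROM note_links_aux
 WHERE nt_note_name = 'KnowledgeBase';
-- Restore the default afterwards.
SET enable_seqscan = on;
```

If the index scan shows up only with seq scans disabled, the planner believes the index is more expensive, which usually points at bad row estimates.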
Using "DEFAULT_STATISTICS_TARGET = 100" made db use index for
field = 'abc', but field = 'xyz' still results in the table scan.
Also 'rows' number reported in the plan is way off.
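A per-column alternative to raising the global default would look like this (a sketch; `ALTER TABLE ... SET STATISTICS` and `ANALYZE` are standard PostgreSQL commands):

```sql
-- Sketch: raise the statistics target for just this column,
-- then re-ANALYZE so the planner gets better value-frequency data
-- (and hopefully a more accurate 'rows' estimate).
ALTER TABLE note_links_aux ALTER COLUMN nt_note_name SET STATISTICS 100;
ANALYZE note_links_aux;
```

This keeps the larger statistics sample limited to the one skewed column instead of every column in the database.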
What can be done here?
Thanks,
Mike.