
Re: explain plan

From: Francisco Reyes <fran(at)reyes(dot)somos(dot)net>
To: rudy <rudy(at)heymax(dot)com>
Cc: <pgsql-novice(at)postgresql(dot)org>
Subject: Re: explain plan
Date: 2001-02-02 05:07:24
Message-ID:
Lists: pgsql-novice
On Tue, 30 Jan 2001, rudy wrote:

> skyy=# vacuum analyze article;
> skyy=# explain select id_article from article where id_article = 21;
> Seq Scan on article  (cost=0.00..1.61 rows=1 width=8)
> skyy=#
> This table has 20,000 records. What am I doing wrong? Why doesn't it use
> the Index I created? Is there something I need to enable, why wouldn't
> it choose an index over a seq scan with more than 20,000 rows to scan?

I am new to PostgreSQL, but I have been doing databases for a while, so I am
going to give feedback based on previous experience with other databases.
Depending on how big each row is, the optimizer may decide that the
overhead of going through the index is not worth it compared to the "cost"
of just reading the whole file sequentially.
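To make the trade-off concrete, here is a rough sketch (not PostgreSQL's actual code; the constants are the defaults documented for postgresql.conf, and the formulas are simplified) of how a planner might weigh the two plans:

```python
# Hypothetical sketch of a planner's cost comparison, using
# PostgreSQL's default cost constants. Real planner formulas are
# more elaborate; this only illustrates the shape of the decision.
SEQ_PAGE_COST = 1.0      # cost of one sequential page read
RANDOM_PAGE_COST = 4.0   # cost of one random page read (index lookups)
CPU_TUPLE_COST = 0.01    # cost of processing one row

def seq_scan_cost(pages, rows):
    """Read every page in order and check every row."""
    return pages * SEQ_PAGE_COST + rows * CPU_TUPLE_COST

def index_scan_cost(index_pages, matching_rows):
    """Descend the index, then fetch each matching row with a random read."""
    return (index_pages * RANDOM_PAGE_COST
            + matching_rows * (RANDOM_PAGE_COST + CPU_TUPLE_COST))

# Tiny table (1 page): reading the whole thing is cheaper than
# paying random-I/O prices for an index lookup.
print(seq_scan_cost(pages=1, rows=100))          # 2.0
print(index_scan_cost(index_pages=1, matching_rows=1))   # 8.01

# Big table (150 pages, 20,000 rows): now the index lookup wins.
print(seq_scan_cost(pages=150, rows=20000))      # 350.0
print(index_scan_cost(index_pages=3, matching_rows=1))   # 16.01
```

This is why a small table can show a Seq Scan even with a perfectly good index: when the whole table fits in a page or two, nothing is cheaper than just reading it.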

You also need to take into account the cardinality of the field in
question (are you familiar with the term?).

For example, if when you ran VACUUM ANALYZE the database noticed that the
field in question has few distinct values, so that it believes your
request would return a large fraction of the rows, then going through the
index may indeed be slower.
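A minimal sketch of that estimate (my own illustration, not PostgreSQL's actual selectivity code): for an equality condition, a planner can guess the number of matching rows as the table size divided by the number of distinct values in the column.

```python
# Hypothetical selectivity sketch: equality on a column is assumed to
# match total_rows / n_distinct rows on average.
def estimated_rows(total_rows, n_distinct):
    return total_rows / n_distinct

# High cardinality (e.g. a unique id): equality matches ~1 row,
# so an index lookup is attractive.
print(estimated_rows(20000, 20000))  # 1.0

# Low cardinality (e.g. a two-valued flag): equality matches half
# the table, and a sequential scan is usually cheaper.
print(estimated_rows(20000, 2))      # 10000.0
```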

How many rows does the query return?

