Re: Large Tables/clustering/terrible performance of Postgresql

From: "Jeffrey W(dot) Baker" <jwbaker(at)acm(dot)org>
To: Michael McAlpine <mikem(at)vis(dot)oregonian(dot)com>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Large Tables/clustering/terrible performance of Postgresql
Date: 2001-12-31 21:51:58
Message-ID: Pine.LNX.4.33.0112311349590.5617-100000@windmill.gghcwest.com
Lists: pgsql-general

On Mon, 31 Dec 2001, Michael McAlpine wrote:

> Thanks for the reply.
>
> Explain results:
>
> NOTICE: QUERY PLAN:
>
> Seq Scan on table1 (cost=0.00..163277.83 rows=1 width=300)
>
> EXPLAIN

Welp, that's your problem, I suspect! Postgres is going to read every
single record in your table to find the result. You need an index, and
Postgres needs to use it. If you don't have an index, add one:

create index table1_name_idx on table1(name);

After you have an index, Postgres needs to learn to use it:

vacuum verbose analyze table1;

Then re-run explain and let us know how things shake out.
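For reference, once the index exists and the table has been analyzed, the plan for a lookup on `name` should switch from a sequential scan to an index scan. A rough sketch of what to expect (the query below is an assumption about what you're running, and the cost numbers are illustrative, not predictions):

```sql
-- Hedged sketch: assumes your original query filters on "name";
-- substitute your actual WHERE clause.
EXPLAIN SELECT * FROM table1 WHERE name = 'some value';

-- Expected shape of the plan once the index is used, e.g.:
--   Index Scan using table1_name_idx on table1
--     (cost=0.00..4.98 rows=1 width=300)
```

If you still see `Seq Scan` after the `vacuum verbose analyze`, post the new plan and the query itself.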

-jwb
