database with 1000000 rows is very slow

From: David Celjuska <dcsoft(at)dcsoft(dot)sk>
To: pgsql-novice(at)postgresql(dot)org, pgsql-sql(at)postgresql(dot)org
Subject: database with 1000000 rows is very slow
Date: 2000-03-05 20:11:02
Message-ID: 38C2BF56.ACC6FA05@dcsoft.sk
Lists: pgsql-sql

Hello all!

I have a database with the following structure:

CREATE TABLE "article" (
    "id" character varying(15) NOT NULL,
    "obj_kod" character varying(15),
    "popis" character varying(80),
    "net_price" float4,
    "our_price" float4,
    "quantity" int2,
    "group1" character varying(40) DEFAULT 'ine',
    "group2" character varying(40),
    "pic1" character varying(10) DEFAULT 'noname.jpg',
    "pic2" character varying(10) DEFAULT 'noname.jpg',
    "alt1" character varying(15),
    "alt2" character varying(15),
    "zisk" int2
);

CREATE UNIQUE INDEX "article_pkey" ON "article" USING btree ( "id" "varchar_ops" );

The table holds 1,000,000 rows. The Postgres daemon runs on a dual Pentium II 330 MHz machine with a SCSI disk, where the database is stored. Even so, a query such as select * from article where id like 'something%' is very slow (several minutes), and a query such as select * from article where id = 'something' is very slow too. I don't know where the problem is, and I would like to optimise this, but how can I do it?

When I use a hash index instead of btree, a query such as select * from article where id = 'something' is fast, but select * from article where id like 'something%' is still very slow.

Can I index some columns explicitly? For example, something like: psql index database table col.
Or does PostgreSQL create indexes automatically?
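(For reference, a sketch of manual index creation: PostgreSQL builds indexes only when told to, via the SQL CREATE INDEX statement rather than a psql shell command. The column name obj_kod below is just one of the table's columns picked as an example.)

```sql
-- Create an ordinary btree index on a single column of "article".
CREATE INDEX article_obj_kod_idx ON article (obj_kod);
```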

How can I see whether Postgres uses an index for a given query? Is that possible?
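(For reference, a minimal sketch of how to check this: prefixing a query with EXPLAIN makes PostgreSQL print its query plan instead of running the query, so one can see whether it chose an Index Scan or a Seq Scan.)

```sql
-- Show the planner's chosen strategy for the slow query.
-- An "Index Scan using article_pkey" line means the index is used;
-- "Seq Scan on article" means the whole table is read.
EXPLAIN SELECT * FROM article WHERE id = 'something';
```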

Thank you for every reply,
Davy!
