Re: [GENERAL] 50 MB Table

From: Paul Condon <pecondon(at)quiknet(dot)com>
To: JB <jimbag(at)kw(dot)igs(dot)net>
Cc: pgsql-general(at)postgreSQL(dot)org
Subject: Re: [GENERAL] 50 MB Table
Date: 2000-03-07 00:51:15
Message-ID: 38C45283.570BAF60@quiknet.com
Lists: pgsql-general

JB wrote:

> I have a 50 MB +- table in postgres. The data is normalized so there's
> not much I can do about the size. The tuples are about 512 bytes so
> there's a pile of 'em. I need searching on several of the fields; a couple
> in particular are text fields that need 'LIKE'. The problem is, the
> thing is way too slow. So, I was wondering, before I go hunting for some
> other solution, could anyone here point me to some ways to (hand)
> optimize the searching in postgres? Different indexes, hashing and LIKE?
> I'm not sure where to go with this.
>
> The basic criteria are:
> - sizes of indexes, etc., are not an issue. There's lots of room on the
> box.
> - the data is basically static so a read-only (if such a thing) is
> fine.
> - it needs to be FAST
>
> cheers
> jb
>
> ************

It sounds as if you have several different kinds of information encoded in
a single column using special words or letter combinations. This is a
violation of the ideal that data items should be "atomic." You should make
a catalog of all the things that you want to be able to say about each
tuple, and design a relational schema in which each atomic assertion is given
its own column (attribute). Then you will be able to create indices on
each, and you won't have to use LIKE in your WHERE clauses.
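A minimal sketch of what that restructuring could look like. The table and column names here are hypothetical, invented for illustration; the original thread never describes the actual schema:

```sql
-- Hypothetical: suppose the slow text field packed a status code and a
-- region name into one string that was being searched with LIKE.
-- Splitting them into atomic columns lets each be indexed directly.
CREATE TABLE item (
    item_id     integer PRIMARY KEY,
    description text,
    status      varchar(16),   -- formerly embedded in the text field
    region      varchar(16)    -- formerly embedded in the text field
);

-- Ordinary B-tree indexes on the atomic columns:
CREATE INDEX item_status_idx ON item (status);
CREATE INDEX item_region_idx ON item (region);

-- The search becomes indexable equality tests instead of a LIKE scan:
SELECT item_id, description
FROM item
WHERE status = 'active'
  AND region = 'west';
```

Since the data is essentially read-only, the cost of maintaining the extra indexes on insert and update is not a concern here.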

Paul
