Re: [GENERAL] 50 MB Table

From: JB <jimbag(at)kw(dot)igs(dot)net>
To: "Culberson, Philip" <philip(dot)culberson(at)dat(dot)com>
Cc: Paul Condon <pecondon(at)quiknet(dot)com>, pgsql-general(at)postgreSQL(dot)org
Subject: Re: [GENERAL] 50 MB Table
Date: 2000-03-07 16:06:56
Message-ID: 38C52920.4BCB5D34@kw.igs.net
Lists: pgsql-general

Thank you both for the suggestions. I did not realize that 'vacuum
analyse' was necessary to get the indexes working. That alone cut the
search time almost in half. I'll do the RAM bit as well.
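
For the record, the fix amounts to something like this, using the 'info'
table from the schema quoted below. VACUUM ANALYZE refreshes the
statistics the planner needs before it will consider an index:

  VACUUM ANALYZE info;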

cheers
jb

"Culberson, Philip" wrote:
>
> JB,
>
> The 20 seconds definitely sounds excessive. Have you done the following?
>
> 1) Run "vacuum analyze info"? If you have not, Postgres will not make use
> of any indices.
>
> 2) Run an explain plan on your query to see what Postgres thinks it's going
> to do? (See the sketch just below.)
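>
> A rough sketch (the exact plan output wording varies by Postgres version):
>
>   EXPLAIN SELECT * FROM info WHERE street_name LIKE 'MAIN%';
>
> If the plan reports a sequential scan on info rather than an index scan,
> the planner either has no statistics yet or can't use the index for that
> pattern.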
>
> In another post, Howie suggested more RAM. If speed is a primary concern, I
> second his recommendation. Bump up your RAM and crank up the number of
> shared memory buffers. Here is an example of the command I use to start up
> Postgres:
>
> postmaster -i -B 12000 -d 2 -o "-F -S 4096 -s" >&! server.log &
>
> The "-B 12000" tells Postgres to set aside 12,000 8k buffers. With the size
> of your table, you should easily be able to fit the whole thing into memory.
> Keep in mind that your OS must have its shared memory segment defined large
> enough to handle all the buffers... in this case, about 100 Meg.
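>
> On Linux you can check the limit and, as root, raise it on the fly; a
> sketch assuming the 2.2 kernel that ships with RH6.1 (the value is in
> bytes, 128 Meg here):
>
>   cat /proc/sys/kernel/shmmax
>   echo 134217728 > /proc/sys/kernel/shmmax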
>
> Phil Culberson
> DAT Services
>
> -----Original Message-----
> From: JB [mailto:jimbag(at)kw(dot)igs(dot)net]
> Sent: Monday, March 06, 2000 5:52 PM
> To: Paul Condon
> Cc: pgsql-general(at)postgreSQL(dot)org
> Subject: Re: [GENERAL] 50 MB Table
>
> Thanks for taking the time to reply. I think I wasn't as clear as I could
> have been. This table is normalized and, as far as I understand, what I'm
> doing with it is not extraordinary. The schema is basically...
>
> CREATE TABLE info (
> lastname char(50),
> street_name char(50),
> street_number char(5),
> ... (a bunch of other stuff that works fine with '=')
> );
>
> CREATE INDEX nx_info1 ON info (lastname);
> CREATE INDEX nx_info2 ON info (street_name);
>
> The select is as simple as this in most cases...
>
> SELECT * FROM info WHERE street_name LIKE 'MAIN%';
>
> ...the table is about 50MB worth, about 70,000 records. I have an index
> on 'lastname' and another on 'street_name', and I need to search on each
> of these with 'LIKE'. So I was wondering about ways to speed this up.
> It's very slow; it takes about 20 seconds for the above query. I even
> uppercased all the names, hoping that would help. I wondered if I'd used
> the wrong index type (btree), or if there are some flags that would help.
> Is there a way to bust the indexes out alphabetically on the first
> letter, say, or some other such scheme? BTW the machine is RH6.1 with
> 128MB RAM, a 27GB disk, and a P350, no X and no users (except me ;)
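>
> One thing I wondered about: since the pattern is anchored at the front,
> the same search could presumably be written as a plain range, which a
> btree should be able to walk directly. Something like this (just a
> sketch, 'MAIO' being the next prefix after 'MAIN'):
>
>   SELECT * FROM info
>   WHERE street_name >= 'MAIN' AND street_name < 'MAIO';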

...etc...
--
If everything is coming your way then you're in the wrong lane.
