Hi all. I'm not subscribed to the list but I thought I'd drop this in
here as a suggestion. There is a thread in GENERAL called '50MB Table'
for those interested. The summary is: I have a roughly 50MB table with about
70,000 records, on which I was doing LIKE selects. A search was taking
approx 20 secs to complete. The table is something like...
CREATE TABLE info ( ... );
CREATE INDEX nx_info1 ON info (lastname);
CREATE INDEX nx_info2 ON info (street_name);
on which I was doing...
SELECT * FROM info WHERE street_name LIKE 'MAIN%';
...this would take about 20 secs to complete. Because the wildcard only
appears at the end of the search string, I changed the query to...
SELECT * FROM info WHERE substring( street_name from 1 for 4 ) = 'MAIN';
...this takes under 2 secs. I wrote a piece of code in Python to do this
automatically for me but it seems to me that the parser/optimizer could
take a look at this case and re-write the query with the '=' instead of
the 'LIKE'. I've looked through the 'C' code to see where this could
happen but it is too thick for me to sort out with my schedule, so I
thought I'd make the suggestion here.
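For anyone curious what such a client-side rewriter might look like, here is a
minimal sketch (my actual script is not included here; this is a hypothetical
reconstruction). It assumes the pattern is a plain quoted string with a single
'%' at the end and no '_' or escape characters, and rewrites the LIKE into the
equivalent substring equality:

```python
import re

# Match "col LIKE 'PREFIX%'" where the wildcard appears only at the end
# and the prefix contains no '%', '_', or quote characters.
LIKE_PREFIX = re.compile(r"(\w+)\s+LIKE\s+'([^%_']+)%'", re.IGNORECASE)

def rewrite_like(query):
    """Rewrite trailing-wildcard LIKE predicates into substring equality."""
    def repl(m):
        col, prefix = m.group(1), m.group(2)
        # substring(col from 1 for N) = 'PREFIX' is equivalent when the
        # only wildcard is the trailing '%'.
        return "substring({0} from 1 for {1}) = '{2}'".format(
            col, len(prefix), prefix)
    return LIKE_PREFIX.sub(repl, query)

print(rewrite_like("SELECT * FROM info WHERE street_name LIKE 'MAIN%';"))
```

Note this is only a sketch: a real rewriter would have to handle escaped
wildcards, patterns held in variables, and case-sensitivity of the column
reference before it would be safe to apply blindly.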
If everything is coming your way then you're in the wrong lane.