From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: efficient storing of urls
Date: 2004-02-28 00:18:00
Message-ID: 6065dsoy4n.fsf@dev6.int.libertyrms.info
Lists: pgsql-general
shannyconsulting(at)earthlink(dot)net (Sean Shanny) writes:
> Can you give an example of a query that has gotten slower due to the
> increasing size of the urls table with an explain analyze?
There's a "known issue" in that URL strings commonly share the prefix
"http://". What you get, as a result, is that there's very little
uniqueness in the leading characters, and indices are known to suffer
when keys share a long common prefix.
There was a report last week that essentially storing the URLs
backwards, and having a functional index on the reversed form, led to
greatly improved selectivity of the index.
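To see why reversing helps, compare how far into the key the strings stay identical, forwards versus backwards. A quick Python illustration (my own, with made-up URLs, not from the report):

```python
import os

def common_prefix_len(strings):
    """Length of the longest leading run shared by all the strings --
    the undiscriminating part an index comparison must wade through."""
    return len(os.path.commonprefix(strings))

urls = [
    "http://example.com/products",
    "http://example.org/about",
    "http://another.net/faq",
]

# Forwards, every key agrees on the useless "http://" run (7 chars);
# reversed, the keys diverge at the very first character.
forward = common_prefix_len(urls)                      # 7
backward = common_prefix_len([u[::-1] for u in urls])  # 0
```

The leading bytes of the reversed form (the tail of the URL) are the most variable part, so an index on the reversed string discriminates between rows almost immediately.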
The approach being suggested here looks more like the "prefix
splitting" typical of Patricia tries; that's what the New Oxford
English Dictionary project used for building efficient text search
indices. It ought to be pretty quick, but pretty expensive in terms
of the complexity it adds.
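For comparison, prefix splitting factors a shared run of characters out once and stores only the distinct tails, the way a Patricia/radix trie node does. A toy sketch (my own illustration, nothing to do with the NOED code):

```python
import os

def prefix_split(keys):
    """Factor the longest common prefix out of a set of keys, storing
    the shared run once and keeping only the distinct remainders."""
    prefix = os.path.commonprefix(keys)
    return prefix, [k[len(prefix):] for k in keys]

prefix, tails = prefix_split([
    "http://www.example.com/a",
    "http://www.example.com/b",
    "http://www.example.net/c",
])
# prefix == "http://www.example." ; tails == ["com/a", "com/b", "net/c"]
```

Done recursively over the tails, this yields the trie structure; the win is compactness, but maintaining such a structure under inserts is where the added complexity comes from.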
I suspect that doing the "reverse the URL" trick would be a cheaper
"fix."
--
"cbbrowne","@","ntlug.org"
http://www.ntlug.org/~cbbrowne/linuxxian.html
"This .signature is shareware. Send in $20 for the fully registered
version..."