From: "John Wells" <jb(at)sourceillustrated(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Large table search question
Date: 2004-05-30 17:55:01
Message-ID: 39197.172.16.2.8.1085939701.squirrel@devsea.com
Lists: pgsql-general
Guys,
I have a general question about designing databases for large data sets.
I was speaking with a colleague about an application we're preparing to
build. One of the application's tables will potentially contain 2 million
or more names, with (at least) the fields first_name, last_name,
middle_name, and prefix.
A common lookup the application will require is by full name, i.e. prefix
+ first_name + middle_name + last_name.
My friend's suggestion was to create a "lookup field" in the table itself,
which would contain a concatenation of these fields created during insert.
So, for each record, we'd have each individual field plus a full_name
field containing the combination of the individual fields.
His argument is that this will make lookups in this manner extremely fast
and efficient.
I agree with his assertion, but get the feeling that this is sort of an
ugly design. Would a compound index on these fields really be less
efficient?
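To make the comparison concrete, here's a minimal sketch of the two
designs. It uses Python's sqlite3 as a stand-in (the table and column
names are hypothetical, and the syntax carries over to PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

conn.execute("""
    CREATE TABLE people (
        prefix TEXT, first_name TEXT, middle_name TEXT, last_name TEXT,
        full_name TEXT   -- Option 1: concatenation precomputed at insert time
    )
""")
# Option 1: an ordinary index on the precomputed lookup field
conn.execute("CREATE INDEX idx_full_name ON people (full_name)")
# Option 2: a compound index over the individual name fields
conn.execute("""
    CREATE INDEX idx_name_parts
        ON people (prefix, first_name, middle_name, last_name)
""")

parts = ("Mr.", "John", "Q.", "Public")
conn.execute("INSERT INTO people VALUES (?, ?, ?, ?, ?)",
             parts + (" ".join(parts),))

# Option 1 lookup: a single indexed equality comparison
hit1 = conn.execute(
    "SELECT first_name FROM people WHERE full_name = ?",
    ("Mr. John Q. Public",)).fetchone()

# Option 2 lookup: four indexed comparisons, but no extra column to
# keep in sync on INSERT/UPDATE
hit2 = conn.execute(
    """SELECT first_name FROM people
       WHERE prefix = ? AND first_name = ? AND middle_name = ?
         AND last_name = ?""",
    parts).fetchone()

print(hit1, hit2)  # both queries find the row via an index
```

The trade-off, as I see it: Option 1 buys a simpler query at the cost of
a redundant column that must be maintained on every write.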
Thanks for your help!
John