I have a general question about designing databases for large data sets.
I was speaking with a colleague about an application we're preparing to
build. One of the application's tables will potentially contain 2 million
or more names, containing (at least) the fields first_name, last_name,
middle_name and prefix.
A common lookup the application will require is the full name, so prefix +
first_name + middle_name + last_name.
My friend's suggestion was to create a "lookup field" in the table itself,
which would contain a concatenation of these fields created during insert.
So, for each record, we'd have each individual field and then a
full_name field that would contain the combination of the individual fields.
His argument is that this will make lookups of this kind extremely fast.
I agree with his assertion, but I get the feeling that this is a rather
ugly design. Would a compound index on these fields really be that much
less efficient?
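For concreteness, here's a sketch of the two designs we're weighing (table
and column names are made up for illustration):

```sql
-- Option 1: denormalized lookup field, populated at insert time
CREATE TABLE people (
    id          serial PRIMARY KEY,
    prefix      text,
    first_name  text,
    middle_name text,
    last_name   text,
    -- concatenation of the four name fields, set by the application
    -- (or a trigger) on insert
    full_name   text
);
CREATE INDEX people_full_name_idx ON people (full_name);

-- A lookup would then be:
-- SELECT * FROM people WHERE full_name = 'Dr. John Q. Public';

-- Option 2: compound index on the individual fields
CREATE INDEX people_name_idx
    ON people (prefix, first_name, middle_name, last_name);
```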
Thanks for your help!
pgsql-general by date