
Large table search question

From: "John Wells" <jb(at)sourceillustrated(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Large table search question
Date: 2004-05-30 17:55:01
Message-ID:
Lists: pgsql-general

I have a general question about designing databases for large data sets.

I was speaking with a colleague about an application we're preparing to
build.  One of the application's tables will potentially contain 2 million
or more names, containing (at least) the fields first_name, last_name,
middle_name and prefix.

A common lookup the application will require is the full name, so prefix +
first_name + middle_name + last_name.

My friend's suggestion was to create a "lookup field" in the table itself,
which would contain a concatenation of these fields, populated at insert
time.  So, for each record, we'd have each individual field plus a
full_name field containing the combination of the individual fields.
His argument is that this will make lookups of this kind extremely fast
and efficient.
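For concreteness, a rough sketch of what he's proposing might look like the
following (the table, trigger, and index names here are purely illustrative,
not anything we've settled on):

    -- Hypothetical table with a denormalized full_name lookup column.
    CREATE TABLE person (
        id          serial PRIMARY KEY,
        prefix      text,
        first_name  text,
        middle_name text,
        last_name   text,
        full_name   text
    );

    -- Keep full_name in sync at insert/update time.
    CREATE OR REPLACE FUNCTION person_set_full_name() RETURNS trigger AS $$
    BEGIN
        NEW.full_name := coalesce(NEW.prefix || ' ', '')
                      || NEW.first_name || ' '
                      || coalesce(NEW.middle_name || ' ', '')
                      || NEW.last_name;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER person_full_name
        BEFORE INSERT OR UPDATE ON person
        FOR EACH ROW EXECUTE PROCEDURE person_set_full_name();

    -- A plain btree index then serves equality lookups on the full name.
    CREATE INDEX person_full_name_idx ON person (full_name);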

I agree with his assertion, but get the feeling that this is sort of an
ugly design.  Would a compound index on these fields really be less
efficient?
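What I had in mind instead is just a multicolumn index over the name parts,
with the lookup query supplying all four columns (again, the names and
sample values below are only illustrative):

    -- The multicolumn-index alternative: no extra column to keep in sync.
    CREATE INDEX person_name_idx
        ON person (last_name, first_name, middle_name, prefix);

    -- A full-name lookup then constrains all four columns.
    SELECT id
      FROM person
     WHERE last_name   = 'Wells'
       AND first_name  = 'John'
       AND middle_name = 'B'
       AND prefix      = 'Mr';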

Thanks for your help!


