We are working on a project that will have very big
tables (at least very big in my opinion). Initially we
estimate that three tables of our database will each
hold between 30 million and 60 million records, and we
think we will very soon have more than 100 million
records in each of these tables. In addition, each
record will have on average 120 sub-records. To avoid
an even bigger table size, we plan to store these
sub-records inside the main record. The sub-records are
simple key/value pairs. We are not sure which is
better: storing them as an array, as a serialized
object (we are using Java on the server), or some other
solution we are not aware of. There is no performance
concern with these sub-records, as they will not
participate in any query; the only time we need them is
after we have found the main record and are extracting
its complete details.
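To make the serialized-object option concrete, here is a minimal sketch of round-tripping the key/value sub-records through standard Java serialization, so the whole map can be stored in a single binary column (e.g. a bytea) next to the main record. The class and method names here are hypothetical, chosen only for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class SubRecordCodec {

    // Serialize the key/value sub-records into a single byte array,
    // suitable for storage in one binary column of the main record.
    static byte[] toBytes(HashMap<String, String> subRecords) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(subRecords);
        }
        return buf.toByteArray();
    }

    // Deserialize the byte array back into the key/value map when the
    // main record's complete details are extracted.
    @SuppressWarnings("unchecked")
    static HashMap<String, String> fromBytes(byte[] data)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (HashMap<String, String>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, String> subs = new HashMap<>();
        subs.put("color", "red");
        subs.put("size", "XL");

        byte[] stored = toBytes(subs);          // what would go into the column
        HashMap<String, String> restored = fromBytes(stored);

        System.out.println(restored.equals(subs)); // prints "true"
    }
}
```

Note that Java serialization ties the stored bytes to the class format; a plain delimited text encoding would be another option if the pairs ever need to be read outside Java.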
So my question is: as a database admin, what would you
do with this kind of database and its big tables? Is it
a good idea to break those tables into sub-tables
(dividing by inheritance, say by the first letter of
the primary key), or is clustering enough? Or is there
another solution?
And what is your opinion about the sub-records?
Thanks in advance.
pgsql-admin by date