From: | "Martin Dillard" <martin(at)edusoftinc(dot)com> |
---|---|
To: | <pgsql-general(at)postgresql(dot)org> |
Subject: | scaling a database |
Date: | 2002-02-25 21:32:52 |
Message-ID: | 13aa01c1be44$0d264530$0301a8c0@market |
Lists: | pgsql-general |
This is a fairly standard DBA-type question, but I can't seem to find the answer in any of the Postgres mailing list archives or on any other DBA-centric site out there.
We are growing a database on PostgreSQL/Linux fairly quickly and are concerned about reaching resource limits. More specifically, we currently have about 20,000 "users" in the system, and one table contains an average of 50 rows per user (around 1 million rows). This table summarizes data for the user and contains 10 integer columns, 2 character varying columns of length up to 50, and 2 boolean columns. Fortunately, this data is read-only, so the table is indexed on every relevant column. Our entire database is around 800 MB at this point.
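For concreteness, the summary table described above might look roughly like the following sketch. All table, column, and index names here are invented for illustration; the original message does not give them.

```sql
-- Hypothetical DDL for the per-user summary table:
-- 10 integer columns, 2 varchar(50) columns, 2 boolean columns,
-- indexed on every column that queries filter on.
CREATE TABLE user_summary (
    user_id   integer NOT NULL,   -- ties each row to one of the ~20,000 users
    metric_1  integer,
    metric_2  integer,
    -- ... eight more integer columns ...
    label_1   varchar(50),
    label_2   varchar(50),
    flag_1    boolean,
    flag_2    boolean
);

-- Because the data is read-only, index maintenance cost is paid once
-- at load time, so indexing every relevant column is cheap to keep.
CREATE INDEX user_summary_user_id_idx ON user_summary (user_id);
CREATE INDEX user_summary_metric_1_idx ON user_summary (metric_1);
-- ... one index per remaining queried column ...
```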
What I am wondering is: will we reach some kind of limit if we want to grow the database to accommodate 1 million or even 10 million users? Right now PostgreSQL runs exclusively on a dedicated dual-processor server with 1 GB of RAM. What should we expect to cause problems as the database grows? What are the standard solutions? Would you generally throw more hardware at the problem? Would you partition the database into multiple instances, each with a subset of the users and their data? Any information that someone can point me to would be greatly appreciated.
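One way to partition users without splitting into separate instances is table partitioning inside a single database. As a sketch only: current PostgreSQL (10 and later, long after this message was written) supports declarative hash partitioning, which spreads users across child tables by `user_id`. All names below are hypothetical.

```sql
-- Hypothetical sketch: hash-partition the summary table by user,
-- so each partition holds roughly 1/4 of the users' rows.
-- Requires PostgreSQL 10+; names are illustrative only.
CREATE TABLE user_summary (
    user_id  integer NOT NULL,
    metric_1 integer,
    label_1  varchar(50),
    flag_1   boolean
) PARTITION BY HASH (user_id);

CREATE TABLE user_summary_p0 PARTITION OF user_summary
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE user_summary_p1 PARTITION OF user_summary
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE user_summary_p2 PARTITION OF user_summary
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE user_summary_p3 PARTITION OF user_summary
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```

Queries that filter on `user_id` touch only the one partition that hashes to that user, which keeps per-partition indexes small as the row count grows.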
__________________________
Martin Dillard
Senior Engineer
Edusoft
http://www.edusoftinc.com
Direct: (415) 341-8562
Main: (415) 863-6315
E-mail: martin(at)edusoftinc(dot)com