Architecting a database

From: tony(at)exquisiteimages(dot)com
To: pgsql-performance(at)postgresql(dot)org
Subject: Architecting a database
Date: 2010-06-25 19:36:07
Message-ID: 4f262829ce5efd2bc0296ef8a081cc41.squirrel@www.exquisiteimages.com
Lists: pgsql-performance

I am in the process of moving a system that has been built around FoxPro tables for the last 18 years to a PostgreSQL-based system.

Over time I came up with decent strategies for making the FoxPro tables work well with the workload that was placed on them, but we are getting to the point where the locking mechanisms cause problems whenever the more heavily used tables are written to.

With the FoxPro tables I had one directory that contained the tables with global data common to all clients: things like documents that had been received and logged, checks that had been cut, etc. Each client then had its own directory, which housed tables with information relating to that specific client. Setting things up like this kept me from having any tables that were too terribly large, so record addition and index creation were not very time-consuming.
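
As far as I can tell, the closest analogue to that directory layout in PostgreSQL would be a schema per client plus a shared schema for the global data. Just to make the comparison concrete, something like this (all names are made up for illustration):

    -- Shared data common to all clients, like the old global directory
    CREATE SCHEMA global_data;
    CREATE TABLE global_data.documents (
        doc_id      serial PRIMARY KEY,
        received_on date NOT NULL,
        description text
    );

    -- One schema per client, like the old per-client directories
    CREATE SCHEMA client_0001;
    CREATE TABLE client_0001.contacts (
        contact_id serial PRIMARY KEY,
        name       text NOT NULL,
        address    text
    );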

I am wondering how I should architect this in PostgreSQL. Should I follow a similar strategy and have a separate database for each client plus one database that contains the global data? With dBase and ISAM tables I have a good idea of how to handle things, since I have been working with them since dBASE originally came out, but with PostgreSQL I am not so certain how the data is arranged within the database files. Does having the data all in one database allow PostgreSQL to better utilize indexes and caches, or does having a number of smaller databases provide performance increases? In case it is important, there are 2000 clients involved, so that would be 2000 databases if I followed my current FoxPro-related structure. Of course, I suppose it is always possible to combine a number of groups into one database if the number of databases is an issue.
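
To make the two options concrete, here is roughly what I am weighing (all names here are placeholders):

    -- Option 1: a database per client, plus one for the global data
    CREATE DATABASE globaldata;
    CREATE DATABASE client_0001;
    CREATE DATABASE client_0002;
    -- ... and so on, up to roughly 2000 client databases

    -- Option 2: one database, with a client key on every
    -- client-specific table
    CREATE TABLE accounts (
        account_id serial PRIMARY KEY,
        client_id  integer NOT NULL,
        balance    numeric(12,2) NOT NULL DEFAULT 0
    );
    CREATE INDEX accounts_client_idx ON accounts (client_id);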

The client-specific databases generally contain name and address information as well as tables for 10 different types of accounts, which require different structures. Those tables hold anywhere from 10,000 transactions apiece for some of the smaller groups to 1 million for the larger groups. I believe we have a read-to-write ratio of about 1 to 15.
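
For a sense of the shape of the data, a transaction table for one of the account types might look something like this in the single-database layout (columns are representative, not exact; in a per-client database the client_id column would go away):

    CREATE TABLE checking_transactions (
        txn_id     bigserial PRIMARY KEY,
        client_id  integer NOT NULL,
        account_id integer NOT NULL,
        txn_date   date NOT NULL,
        amount     numeric(12,2) NOT NULL
    );
    CREATE INDEX checking_txn_client_date_idx
        ON checking_transactions (client_id, txn_date);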

Thanks for any input.
