using schemas for data separation

From: snacktime <snacktime(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: using schemas for data separation
Date: 2006-09-29 05:59:00
Message-ID: 1f060c4c0609282259i225972feu32dd0ccea318dbfc@mail.gmail.com
Lists: pgsql-general

I'm re-evaluating a few design choices I made a while back, and one
that keeps coming to the forefront is data separation. We store
sensitive information for clients. A database for each client isn't
really workable, or at least I've never thought of a way to make it
workable: we have several thousand clients, and the databases all
have to be accessed through a limited number of web applications
where performance is important and things like persistent connections
are a must. I've always been paranoid about a programmer error in an
application resulting in data from multiple clients getting mixed
together. Right now we create a schema for each client, with each
schema having the same tables. The connections to the database come
from an unprivileged user, and everything goes through functions that
run at the necessary privileges. We use SET search_path TO public,
user (where user is the client's schema). User data is in the
client's schema and the functions are in the public schema. Every
table has a client_id column.

This has worked well so far, but it's a real pain to manage, and as
we ramp up I'm not sure it's going to scale that well. So anyway, my
question is this: am I being too paranoid about putting all the data
into one set of tables in a common schema? For thousands of clients,
what would you do?
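
For comparison, the common-schema alternative I'm asking about would
look something like this (again just a sketch with made-up names);
the part that scares me is that every query has to remember the
client_id filter:

    -- One set of tables shared by all clients.
    CREATE TABLE public.clients (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE public.transactions (
        id        serial PRIMARY KEY,
        client_id integer NOT NULL REFERENCES public.clients (id),
        amount    numeric(12,2)
    );

    CREATE INDEX transactions_client_id_idx
        ON public.transactions (client_id);

    -- Correct: scoped to a single client.
    SELECT * FROM public.transactions WHERE client_id = 1234;

    -- The feared programmer error: omit the WHERE clause and every
    -- client's rows come back.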

Chris
