Re: reasonable limit to number of schemas in a database?

From: Ben <bench(at)silentmedia(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: reasonable limit to number of schemas in a database?
Date: 2007-04-25 15:50:11
Message-ID: 52734F01-6705-43C9-A49D-266A44042237@silentmedia.com
Lists: pgsql-general

I'm currently using a normal setup like the one you suggest, in which
every user puts their data into a single shared schema, with key
columns to keep things separate. The problem comes with database
upgrades. They're not common, but as I ramp up the number of users it
becomes increasingly infeasible to upgrade everybody at once, yet
everybody using the same schema has to be on the same schema version.
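
For concreteness, here's a rough sketch of the two layouts I'm
weighing (table and column names are made up):

    -- shared schema: one set of tables, rows separated by a key column;
    -- every user is necessarily on the same schema version
    CREATE TABLE documents (
        user_id integer NOT NULL,
        doc_id  integer NOT NULL,
        body    text,
        PRIMARY KEY (user_id, doc_id)
    );

    -- schema-per-user: each user gets a private copy of the tables,
    -- so users can be migrated to a new version one at a time
    CREATE SCHEMA user_1234;
    CREATE TABLE user_1234.documents (
        doc_id integer PRIMARY KEY,
        body   text
    );

With the second layout the application queries could stay mostly
unchanged by setting the search path per session, e.g.
SET search_path TO user_1234, public;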

Each session will probably touch most, if not all, of the tables
eventually, but will only touch a dozen or so from each schema with
any regularity.

Is the 300K-files-per-directory issue my only real bottleneck, or
should I also worry about the catalog cache and lock table space? How
would I overcome those last two?
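
From what I can tell, the lock table is sized to roughly
max_locks_per_transaction * (max_connections +
max_prepared_transactions) entries, and each table touched in a
transaction takes one slot, so I'm guessing I'd have to raise it from
the default of 64. Something like this, though the value is just a
guess:

    -- check the current settings from psql
    SHOW max_locks_per_transaction;
    SHOW max_connections;

    # then in postgresql.conf (needs a server restart):
    max_locks_per_transaction = 256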

On Apr 24, 2007, at 10:14 PM, Tom Lane wrote:

> The number of schemas doesn't scare me so much as the number of tables.
> Are you using a filesystem that can cope gracefully with 300K files in
> one directory? How many of these tables do you anticipate any one
> session touching? (That last translates to catalog cache and lock table
> space...)
>
> Generally, when someone proposes a scheme like this, they are thinking
> that N identical tables are somehow better than one table with an
> additional key column. The latter is usually the better design, unless
> you have special requirements you didn't mention.
>
> regards, tom lane
