| From: | Phil Endecott <spam_from_postgresql_general(at)chezphil(dot)org> | 
|---|---|
| To: | pgsql-general(at)postgresql(dot)org | 
| Subject: | Scalability with large numbers of tables | 
| Date: | 2005-02-20 13:24:49 | 
| Message-ID: | 42188FA1.5090108@chezphil.org | 
| Lists: | pgsql-general | 
Dear Postgresql experts,
I have a single database with one schema per user.  Each user has a 
handful of tables, but there are lots of users, so in total the database 
has thousands of tables.
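In case it helps to see the scale, a rough picture can be had from the system catalogs.  This is just a sketch (the list of schemas to exclude is simply whatever isn't user data):

```sql
-- Rough count of relations per user schema, from the system catalogs;
-- the excluded schemas are just the non-user ones.
SELECT n.nspname AS schema,
       count(*)  AS relations
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  n.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
GROUP  BY n.nspname
ORDER  BY count(*) DESC;
```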
I'm a bit concerned about scalability as this continues to grow.  For 
example, I find that tab-completion in psql is now unusably slow; if 
anything more important has the same algorithmic complexity, it will 
become a problem too.  There are 42,000 files in the database 
directory, which is enough that, with a "traditional" unix filesystem 
like ext2/3, kernel operations on directories take a significant time. 
(In other applications I've generally used a guide of 100-1000 files 
per directory before adding extra layers, but I don't know how valid 
this is.)
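For what it's worth, the 42,000 figure is in line with what the catalogs report: as I understand it, every table, index, sequence and TOAST table has at least one file under the database directory, so something like this gives a lower bound on the file count:

```sql
-- Lower bound on files in the database directory: each table ('r'),
-- index ('i'), sequence ('S') and TOAST table ('t') has at least one
-- file on disk (views do not).
SELECT relkind, count(*)
FROM   pg_class
GROUP  BY relkind
ORDER  BY count(*) DESC;
```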
I'm interested to know if anyone has any experiences to share with 
similar large numbers of tables.  Should I worry about it?  If 
architectural changes are needed, I don't want to wait until something 
breaks badly before making them. 
Presumably tablespaces could be used to avoid the 
too-many-files-per-directory issue, though I've not moved to 8.0 yet.
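If I've read the 8.0 docs correctly, the idea would be something like the untested sketch below; the tablespace name, path and example table are made up, since I'm still on a pre-8.0 release:

```sql
-- Untested sketch of the 8.0 tablespace approach; the name, path and
-- example table are hypothetical.  The directory must already exist
-- and be owned by the postgres user.
CREATE TABLESPACE user_space_a LOCATION '/disk2/pgsql/user_space_a';

-- New per-user tables could then be placed there explicitly, spreading
-- the files over several directories (and disks, if desired):
CREATE TABLE someuser.orders (
    id      serial PRIMARY KEY,
    created timestamptz DEFAULT now()
) TABLESPACE user_space_a;
```

Whether that actually helps with the directory-size problem presumably depends on how many tablespaces one is prepared to manage.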
Thanks
Phil.