
Re: [HACKERS] Schema Limitations ?

From: Chris Broussard <cbroussard(at)liquiddatainc(dot)com>
To: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [HACKERS] Schema Limitations ?
Date: 2006-05-31 01:07:11
Message-ID:
Lists: pgsql-general, pgsql-hackers
Thanks, Jim, for the interesting information.

In theory, what is the best method (clustering software, or regular
postgresql configuration?) to spread/partition schemas between
physical machines within a single database? Is it even possible?
I have been using Postgres for many years, and the vanilla type
install/configuration has always suited my development & production needs.

Currently, I have separate databases that I can obviously scale by
having different database servers, and I have J2EE application
servers that sit in front of Postgres to manage/synchronize the
relationships between the databases. I'm thinking I can possibly
gain efficiencies and simplify the application logic by collapsing
the data into one database, sharing the sharable data through a
"shareable" schema, and putting each deployed application into its own schema.

How are other people scaling out? Just wondering what other people
think is the best approach.
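A minimal sketch of the layout I have in mind (schema, table, and role
names here are hypothetical, just for illustration):

```sql
-- Shared data lives in one schema every application can read.
CREATE SCHEMA shared;
CREATE TABLE shared.customers (
    customer_id serial PRIMARY KEY,
    name        text NOT NULL
);

-- Each deployed application gets its own schema.
CREATE SCHEMA app_one;
CREATE TABLE app_one.orders (
    order_id    serial  PRIMARY KEY,
    customer_id integer REFERENCES shared.customers
);

-- The application's role resolves unqualified names in its own
-- schema first, then falls back to the shared schema.
ALTER ROLE app_one_user SET search_path = app_one, shared;
```

With that search_path, each application's queries can say just
"orders" or "customers" and still land in the right schema.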



On May 30, 2006, at 1:04 PM, Jim C. Nasby wrote:

> Moving to -general, where this belongs.
> On Sat, May 27, 2006 at 11:13:58PM -0500, Chris Broussard wrote:
>> Hello Hackers,
>> I have the following questions, after reading this FAQ (http://
>> are there statistics around the max number of schemas in a database,
>> max number of tables in a schema, and max number of tables in a
>> database (number that spans schemas)? Are the only limitations based
>> on disk & ram/swap?
> One hard limit you'll run into is OIDs, which max at either 2^31 or 2^32
> (I can't remember offhand which it is). That would be number of schemas,
> and number of total tables (there's a unique index on pg_class.oid).
> Actually, you'll be limited to 2 or 4 billion tables, indexes, and views.
> In reality, I suspect you'll become very unhappy with performance well
> before those numbers. Running a database with just 10000 tables can be a
> bit tricky, though it's certainly doable.
>> Does anybody have a rough ballpark figures of the largest install
>> base on those questions?
>> I'm curious about these stats, because I'm debating on how best to
>> break up data, between schemas, physical separate databases, and the
>> combination of the two.
>> Thanks In Advanced.
>> Chris
> -- 
> Jim C. Nasby, Sr. Engineering Consultant      jnasby(at)pervasive(dot)com
> Pervasive Software    work: 512-231-6117
> vcard:       cell: 512-569-9461
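
For reference, the catalog query below is a sketch of how one can watch
how many relations each schema accumulates, which is the count Jim's OID
and performance limits apply to (it uses the standard pg_class and
pg_namespace catalogs):

```sql
-- Count tables, indexes, and views per schema; each of these
-- consumes one pg_class row (and therefore one OID).
SELECT n.nspname AS schema,
       count(*)  AS relations
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i', 'v')  -- tables, indexes, views
GROUP BY n.nspname
ORDER BY relations DESC;
```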
