Re: [HACKERS] Schema Limitations ?

From: Chris Broussard <cbroussard(at)liquiddatainc(dot)com>
To: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: [HACKERS] Schema Limitations ?
Date: 2006-05-31 01:07:11
Message-ID: 7895F93E-72CE-4E14-BA93-FC2D0E8F7A09@liquiddatainc.com
Lists: pgsql-general pgsql-hackers


Thanks, Jim, for the interesting information.

In theory, what is the best method (clustering software, or regular
PostgreSQL configuration?) to spread/partition schemas across
physical machines within a single database? Is it even possible?
I have been using Postgres for many years, and a vanilla
install/configuration has always suited my development and production
needs...

Currently, I have separate databases that I can obviously scale by
running them on different database servers, and I have J2EE application
servers that sit in front of Postgres to manage/synchronize the
relationships between the databases. I'm thinking I could gain
efficiency and simplify the application logic by collapsing the data
into one database, putting the shared data in a "shareable" schema and
each deployed application in its own schema...
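
As a rough sketch of what I mean (the schema and table names here are
just placeholders, not our actual layout):

```sql
-- One database: a shared schema plus one schema per deployed application.
CREATE SCHEMA shared;    -- data common to all applications
CREATE SCHEMA app_one;   -- one schema per deployed application
CREATE SCHEMA app_two;

CREATE TABLE shared.customers (
    customer_id serial PRIMARY KEY,
    name        text NOT NULL
);

-- Per-application tables can reference the shared data directly,
-- which is what the J2EE layer currently has to synchronize by hand.
CREATE TABLE app_one.orders (
    order_id    serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES shared.customers (customer_id)
);

-- Each application's connections then see their own schema plus the
-- shared one, without qualifying every table name:
SET search_path TO app_one, shared;
```

The appeal is that cross-application integrity becomes a plain foreign
key instead of application-level bookkeeping.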

How are other people scaling out? I'm just wondering what others think
is the best approach.

thanks,

Chris

On May 30, 2006, at 1:04 PM, Jim C. Nasby wrote:

> Moving to -general, where this belongs.
>
> On Sat, May 27, 2006 at 11:13:58PM -0500, Chris Broussard wrote:
>> Hello Hackers,
>>
>> I have the following questions after reading this FAQ
>> (http://www.postgresql.org/docs/faqs.FAQ.html#item4.4): are there
>> statistics on the max number of schemas in a database, the max number
>> of tables in a schema, and the max number of tables in a database
>> (spanning schemas)? Are the only limitations disk and RAM/swap?
>
> One hard limit you'll run into is OIDs, which are unsigned 32-bit
> values, so they max out at 2^32. That caps the number of schemas and
> the total number of tables (there's a unique index on pg_class.oid) --
> in fact, it limits you to roughly 4 billion tables, indexes, and views
> combined.
>
> In reality, I suspect you'll become very unhappy with performance well
> before those numbers. Running a database with just 10000 tables can be
> a bit tricky, though it's certainly doable.
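>
> For a rough sense of where a cluster stands, you can count the rows in
> pg_class (each table, index, view, sequence, and TOAST table gets one
> entry, and each entry consumes an OID):

```sql
-- Total relations in the current database; the unique index mentioned
-- above is on this catalog's oid column.
SELECT count(*) FROM pg_class;

-- Break the count down by relation kind
-- ('r' = table, 'i' = index, 'v' = view, 'S' = sequence, 't' = TOAST):
SELECT relkind, count(*) FROM pg_class GROUP BY relkind;
```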
>
>> Does anybody have rough ballpark figures from the largest installed
>> bases on those questions?
>>
>> I'm curious about these stats because I'm debating how best to break
>> up data: between schemas, physically separate databases, or a
>> combination of the two.
>>
>> Thanks in advance.
>>
>> Chris
>>
>
> --
> Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
> Pervasive Software http://pervasive.com work: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
