Large # of Tables, Getting ready for the enterprise

From: "carl garland" <carlhgarland(at)hotmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Large # of Tables, Getting ready for the enterprise
Date: 2000-08-17 06:17:01
Message-ID: F138sxpTljBOh8s4KWi00002b1a@hotmail.com
Lists: pgsql-hackers

As Postgres gets better and spends more time in the spotlight, there are a couple of
issues I think the hacker group might want to address to better prepare it
for the enterprise and for high-end production systems. Currently Postgres
will support an enormous number of tables (whereas Interbase only supports
64K), but the efficiency and performance of the pg backend degrade quickly
beyond about 1000 tables (a rough way to reproduce this is sketched below the
questions). I know most people will assume the filesystem is the bottleneck,
but as XFS nears completion the problem will shift back to pg. It is my
understanding that lookups in the system tables are always done sequentially
rather than with a more optimized (btree, etc.) access method. I suspect the
same applies to toastable objects when a large number of objects is present.
I want to start looking at the code to maybe help out, but have a few questions:
1) When a table is referenced, is it looked up once and then cached, or does a
scan of the system table happen each time, or only once per session?
2) Which files in the source tree should I look at?
3) Any tips, suggestions, or pitfalls I should keep in mind?
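
For anyone who wants to reproduce the slowdown I'm describing, something along
the following lines should show it; the database name, table names, and the
count are just placeholders, not anything specific to my setup:

#!/bin/sh
# Rough sketch: create many small tables, then time a trivial query
# to watch per-table catalog overhead grow with the table count.
# DB name, table names, and the count below are placeholders.

DB=testdb
N=5000

i=0
while [ $i -lt $N ]; do
    echo "CREATE TABLE bench_t$i (id int4, val text);"
    i=`expr $i + 1`
done | psql -q $DB

# Time a simple lookup against one of the tables; rerun after
# bumping N to see whether it slows down as the catalogs grow.
time psql -q -c "SELECT count(*) FROM bench_t0;" $DB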

Thanx for the pointers,
Carl Garland
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com
