Re: Speed of locating tables

From: "carl garland" <carlhgarland(at)hotmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Speed of locating tables
Date: 2000-05-30 08:04:57
Message-ID: 20000530120457.48493.qmail@hotmail.com
Lists: pgsql-general

> Don't even think about 100000 separate tables in a database :-(. It's
>not so much that PG's own datastructures wouldn't cope, as that very
>few Unix filesystems can cope with 100000 files in a directory. You'd
>be killed on directory search times.

This doesn't really answer the initial question: how long does it take to
locate a table in a database with 1,000,000+ tables, and where and when do
these lookups occur?

I understand the concern about directory search times, but what if the
partition holding the db files is under XFS or some other journaling
filesystem that allows very quick searches of large directories? I also
saw that there may be concern over PG's own data structures, in that the
master tables holding the table and index entries require a sequential
search to locate a table. Why support a large number of tables in PG
if exceeding a certain limit causes severe performance problems? What if
your data model requires more than 1,000,000 tables?
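
As far as I can tell, the lookup in question goes through the pg_class
system catalog. A rough sketch of the probe that has to happen for every
table reference (with 'mytable' just a placeholder name) would be
something like:

    -- catalog probe to resolve a table name; 'mytable' is a placeholder
    SELECT oid, relname FROM pg_class WHERE relname = 'mytable';

    -- EXPLAIN on the same query shows whether the catalog scan is
    -- indexed or sequential on a given installation
    EXPLAIN SELECT oid, relname FROM pg_class WHERE relname = 'mytable';

Whether that probe is a sequential scan or an index lookup is exactly what
determines how badly a 1,000,000-table catalog would hurt.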
________________________________________________________________________
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com
