Problems with Large Databases

From: "carl garland" <carlhgarland(at)hotmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Problems with Large Databases
Date: 2000-06-03 12:18:11
Message-ID: 20000603161811.30904.qmail@hotmail.com
Lists: pgsql-general

In a previous post Ed Loer wrote:

> Don't even think about 100000 separate tables in a database :-(. It's
> not so much that PG's own datastructures wouldn't cope, as that very
> few Unix filesystems can cope with 100000 files in a directory. You'd
> be killed on directory search times.

This didn't really answer the initial question: how long does it take to
locate a table in a database with 1,000,000+ tables, and where and when do
these lookups occur?
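
To get a rough feel for the directory-lookup side of that question, one
could time a single name lookup in a directory holding one file per table.
This is only a sketch: the 100,000-file count echoes the quoted post, and
the file names and the use of os.stat() are illustrative, not what
PostgreSQL itself does on disk.

import os
import tempfile
import time

# Create N empty files in one directory, then time one lookup by name.
N = 100000

with tempfile.TemporaryDirectory() as d:
    for i in range(N):
        # one empty file per "table"
        open(os.path.join(d, "table_%d" % i), "w").close()

    start = time.perf_counter()
    os.stat(os.path.join(d, "table_%d" % (N - 1)))   # a single lookup by name
    elapsed = time.perf_counter() - start
    print("one lookup among %d files took %.6f seconds" % (N, elapsed))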

I understand the concern about directory search times, but what if the
partition holding the database files is on XFS or some other journaling
filesystem that allows very quick lookups in large directories? I also saw
that there may be concern over PG's own data structures, in that the master
tables holding the table and index entries require a sequential search to
locate a table. Why support a large number of tables in PG at all if going
past a certain limit causes severe performance problems? And what if your
data model requires more than 1,000,000 tables?
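
For the catalog side, one can at least see how many entries the master
table (pg_class) is holding, i.e. how many rows a table-name lookup has to
be found among. A minimal sketch; the Python client (psycopg2) and the
connection parameters are assumptions of mine, only the pg_class query
itself is standard:

import psycopg2   # assumption: any PostgreSQL client would do

conn = psycopg2.connect(dbname="mydb", user="postgres")
try:
    cur = conn.cursor()
    # relkind = 'r' restricts the count to ordinary tables
    cur.execute("SELECT count(*) FROM pg_class WHERE relkind = 'r'")
    print("pg_class currently lists %d ordinary tables" % cur.fetchone()[0])
finally:
    conn.close()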
