Re: 15,000 tables

From: David Lang <dlang(at)invendra(dot)net>
To: "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: 15,000 tables
Date: 2005-12-02 07:46:55
Message-ID: Pine.LNX.4.62.0512012341100.2807@qnivq.ynat.uz
Lists: pgsql-performance

On Thu, 1 Dec 2005, Craig A. James wrote:

> So say I need 10,000 tables, but I can create tablespaces. Wouldn't that
> solve the performance problem caused by Linux's (or ext2/3's) problems with
> large directories?
>
> For example, if each user creates (say) 10 tables, and I have 1000 users, I
> could create 100 tablespaces, and assign groups of 10 users to each
> tablespace. This would limit each tablespace to 100 tables, and keep the
> ext2/3 file-system directories manageable.
>
> Would this work? Would there be other problems?

This would definitely help; however, there's still the question of how
large the tables get, and how many files in total are needed to hold the
100 tables.
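To make the file-count question concrete, here is a rough sketch (my numbers and helper names, not from the thread): PostgreSQL splits each relation into 1 GB segment files, and every index adds at least one more file, so the directory entry count per tablespace depends on table size and indexing as well as the raw table count. The bandwidth, index-count, and table-size figures below are assumptions for illustration.

```python
# Sketch: estimate how many files land in one tablespace directory
# under the scheme described above (10 users per tablespace, 10 tables
# per user).  Assumes PostgreSQL's 1 GB segment size (RELSEG_SIZE) and
# two indexes per table; exact per-relation file counts vary by
# version and table size.

SEGMENT_BYTES = 1 << 30  # PostgreSQL splits a relation into 1 GB segments

def files_for_table(table_bytes, n_indexes=2):
    heap_segments = max(1, -(-table_bytes // SEGMENT_BYTES))  # ceil division
    return heap_segments + n_indexes  # each index is at least one more file

def files_per_tablespace(users_per_ts=10, tables_per_user=10,
                         avg_table_bytes=100 << 20):
    per_table = files_for_table(avg_table_bytes)
    return users_per_ts * tables_per_user * per_table

# 10 users x 10 tables x (1 heap segment + 2 index files) = 300 files,
# still far below the directory sizes that hurt ext2/3 lookups.
```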

You still have the problem of having to seek around to deal with all these
different files (and tablespaces just spread them further apart). You
can't eliminate that, but a large write-back journal (as opposed to a
metadata-only one) would mask the problem.

It would be a trade-off: you would end up writing all your data twice, so
throughput would be lower, but since the data is safe as soon as it hits
the journal, the latency for any one request would be lower. That would
let the system use the CPU more and overlap it with your seeking.

David Lang
