Re: Thousands of tables versus one table?

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Thomas Andrews" <tandrews(at)soliantconsulting(dot)com>
Cc: "Mark Lewis" <mark(dot)lewis(at)mir3(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Thousands of tables versus one table?
Date: 2007-06-04 19:43:38
Message-ID: 877iqjmro5.fsf@oxford.xeocode.com
Lists: pgsql-performance


"Thomas Andrews" <tandrews(at)soliantconsulting(dot)com> writes:

> I guess my real question is, does it ever make sense to create thousands of
> tables like this?

Sometimes. But usually it's not a good idea.

What you're proposing is basically partitioning, though you may not actually
need to put all the partitions together for your purposes. Partitioning's main
benefit is in the management of the data. You can drop and load partitions in
chunks rather than having to perform large operations on millions of records.
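
For illustration only, here's a rough sketch of that kind of setup using table
inheritance (the table and column names are made up):

  -- Parent table plus one child per month; the CHECK constraint lets
  -- constraint_exclusion skip partitions that can't match a query.
  CREATE TABLE responses (
      respondent_id integer NOT NULL,
      answered_at   timestamp NOT NULL,
      answer        text
  );

  CREATE TABLE responses_2007_06 (
      CHECK (answered_at >= '2007-06-01' AND answered_at < '2007-07-01')
  ) INHERITS (responses);

  -- The management win: retiring a month of old data (assuming a
  -- responses_2007_05 partition exists) is a single DROP TABLE rather
  -- than a huge DELETE followed by a vacuum.
  DROP TABLE responses_2007_05;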

Postgres doesn't really get any faster by breaking the tables up like that. In
fact it probably gets slower as it has to look up which of the thousands of
tables you want to work with.

How often do you update or delete records and how many do you update or
delete? Once per day is a very low frequency for vacuuming a busy table; you
may be suffering from table bloat. But if you never delete or update records
then that's irrelevant.
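
If you're not sure, the statistics collector has rough per-table churn numbers,
and vacuuming the busy table more often than daily is cheap to try (the table
name is again made up):

  -- How many updates and deletes has each table seen since stats were reset?
  SELECT relname, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables
   ORDER BY n_tup_upd + n_tup_del DESC
   LIMIT 10;

  -- Vacuum and refresh planner statistics on the hot table more frequently.
  VACUUM ANALYZE responses;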

Does reindexing or clustering the table make a marked difference?
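
For example (the index name is hypothetical; CLUSTER takes an exclusive lock
and rewrites the table, so try it off-hours):

  -- Rebuild all indexes on the table.
  REINDEX TABLE responses;

  -- Rewrite the table in the physical order of one index so that scans
  -- following that order touch far fewer pages.
  CLUSTER responses_answered_at_idx ON responses;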

I would suggest you post your schema and the results of "vacuum verbose".

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
