Very Large Table Partitioning

From: Majid Azimi <majid(dot)merkava(at)gmail(dot)com>
To: PostgreSQL - Novice <pgsql-novice(at)postgresql(dot)org>
Subject: Very Large Table Partitioning
Date: 2010-12-17 18:58:30
Message-ID: 4D0BB2D6.1040404@gmail.com

Hi guys.

Here is our problem:

If we store all users' records in a single table, it becomes very large, maybe 10TB+, so we are planning to use table partitioning. But then we run into another problem:

If we partition the table per user, we end up with a huge number of tables (maybe 100,000+), each holding only about 10,000 records. Is this a good idea? Is there any limit on the number of tables?

The table structure does not lend itself to partitioning in any better way. Would it be a good idea to add a column like "date inserted" and partition per year, for example, as in the sketch below?
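
Something like this is what we have in mind (table and column names are only placeholders), a rough sketch following the inheritance-based partitioning scheme from the manual:

-- Parent table holding no rows itself; one child table per year.
CREATE TABLE user_records (
    user_id       bigint NOT NULL,
    payload       text,
    date_inserted date   NOT NULL DEFAULT current_date
);

-- The CHECK constraints let constraint_exclusion skip irrelevant years.
CREATE TABLE user_records_2010 (
    CHECK ( date_inserted >= DATE '2010-01-01'
        AND date_inserted <  DATE '2011-01-01' )
) INHERITS (user_records);

CREATE TABLE user_records_2011 (
    CHECK ( date_inserted >= DATE '2011-01-01'
        AND date_inserted <  DATE '2012-01-01' )
) INHERITS (user_records);

-- Index each child on the columns we actually query by.
CREATE INDEX user_records_2010_user_id_idx ON user_records_2010 (user_id);
CREATE INDEX user_records_2011_user_id_idx ON user_records_2011 (user_id);

We would also add a trigger on the parent to route inserts into the correct child table, as in the partitioning example in the documentation.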
