Operational performance: one big table versus many smaller tables

From: David Wall <d(dot)wall(at)computer(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Operational performance: one big table versus many smaller tables
Date: 2009-10-26 16:46:45
Message-ID: 4AE5D275.80405@computer.org
Lists: pgsql-general

If I have various record types that are "one up" records — structurally
similar (same columns) and mostly retrieved one at a time by their
primary key — is there any performance or operational benefit to
splitting millions of such records across multiple tables (say, by
their application-level purpose) rather than keeping them all in one
big table?
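For concreteness, the two layouts I'm comparing would look roughly like
this (table and column names here are made up for illustration; the
split variant just uses plain separate tables, not inheritance
partitioning):

```sql
-- Option A: one big table holding every record type
CREATE TABLE records (
    id       bigint PRIMARY KEY,
    purpose  text   NOT NULL,   -- application-level record type
    payload  text
);

-- Option B: one structurally identical table per purpose
CREATE TABLE invoice_records (LIKE records INCLUDING INDEXES);
CREATE TABLE receipt_records (LIKE records INCLUDING INDEXES);

-- Either way, access is almost always a single-row primary-key fetch:
-- SELECT * FROM records WHERE id = $1;
```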

I am thinking of both PG query performance (handling queries against
multiple tables, each with hundreds of thousands of rows, versus
queries against a single table with millions of rows) and operational
performance (number of WAL files created, pg_dump, vacuum, etc.).

If anybody has any tips, I'd much appreciate it.

Thanks,
David
