Re: db performance/design question

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: chambers(at)imageworks(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: db performance/design question
Date: 2007-09-12 21:58:34
Message-ID: dcc563d10709121458y2d8b921etb629aae32a59be3b@mail.gmail.com
Lists: pgsql-performance

On 9/12/07, Matt Chambers <chambers(at)imageworks(dot)com> wrote:
>
>
> I'm designing a system that will be doing over a million inserts/deletes on
> a single table every hour. Rather than using a single table, I could
> partition the data into multiple tables, which would be nice because I could
> just truncate them when I no longer need them. I could even use tablespaces
> to split the I/O load over multiple filers. The application does not require
> that all this data be in the same table. The data is fairly temporary: it
> might last 5 seconds or it might last 2 days, but it will all be deleted
> eventually and different data will be created.
>
> Considering that a single table would grow to 10 million+ rows at max, and
> this machine will sustain about 25mbps of insert/update/delete traffic
> 24/7/365, will I be saving much by partitioning the data like that?

This is exactly the kind of application for which partitioning shines,
especially if you can do the inserts directly into the individual
partitions without having to create rules or triggers to handle the
routing. If you do have to point everything at the master table, stick
with triggers: they're much more efficient than rules at slinging data
out to the various child tables.
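
For anyone following along, here's a rough sketch of the trigger approach
using inheritance partitioning. The table, columns, and fixed hour
boundaries are made up for illustration; in practice you'd generate the
partitions and regenerate the routing function as the time window moves:

  -- Hypothetical parent table; the application queries this.
  CREATE TABLE events (
      id         bigint      NOT NULL,
      created_at timestamptz NOT NULL,
      payload    text
  );

  -- Child partitions inherit from the parent. The CHECK constraints
  -- let constraint_exclusion skip irrelevant partitions at query time.
  CREATE TABLE events_h00 (
      CHECK (created_at >= '2007-09-12 00:00'
         AND created_at <  '2007-09-12 01:00')
  ) INHERITS (events);

  CREATE TABLE events_h01 (
      CHECK (created_at >= '2007-09-12 01:00'
         AND created_at <  '2007-09-12 02:00')
  ) INHERITS (events);

  -- Routing function: redirects rows inserted into the parent to the
  -- right child. Returning NULL keeps the row out of the parent itself.
  CREATE OR REPLACE FUNCTION events_insert_trigger() RETURNS trigger AS $$
  BEGIN
      IF NEW.created_at < '2007-09-12 01:00' THEN
          INSERT INTO events_h00 VALUES (NEW.*);
      ELSE
          INSERT INTO events_h01 VALUES (NEW.*);
      END IF;
      RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER insert_events_trigger
      BEFORE INSERT ON events
      FOR EACH ROW EXECUTE PROCEDURE events_insert_trigger();

  -- The application can skip the trigger overhead entirely by inserting
  -- straight into events_h00/events_h01, and expiring a whole partition
  -- is then just:
  TRUNCATE events_h00;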
