The usual guideline is to consider partitioning once your table exceeds about 2 GB. The main benefits:
1. Purging old data very quickly (this is one of the biggest
benefits...especially if you have to purge very often...don't even
think of using DELETE)
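To illustrate point 1: dropping a whole child table is a quick catalog operation, while the equivalent DELETE has to find and mark millions of rows dead and then leave VACUUM to reclaim the space. A minimal sketch, assuming an inheritance-style daily child table named sales_2009_12_01 (the names here are illustrative, not from the original post):

```sql
-- Instant: removes the day's data by dropping its child table.
DROP TABLE sales_2009_12_01;

-- Slow alternative on a large parent table: scans, generates dead
-- tuples, bloats indexes, and still needs a VACUUM afterwards.
-- DELETE FROM sales WHERE sale_date = '2009-12-01';
```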
2. Performance for certain types of queries, where full table scans
benefit from a smaller table size (and hence only the smaller partition will
be scanned instead of the whole table).
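Point 2 relies on constraint exclusion: each child table carries a CHECK constraint on its date range, and the planner uses those constraints to skip children that cannot match the query. A sketch with assumed table and column names:

```sql
-- 'partition' is the default in 8.4: exclusion is only attempted
-- for queries against inheritance trees.
SET constraint_exclusion = partition;

CREATE TABLE sales (sale_date date, amount numeric);

-- Child table for one month; the CHECK constraint is what lets the
-- planner prove this child is irrelevant to out-of-range queries.
CREATE TABLE sales_2009_12 (
    CHECK (sale_date >= DATE '2009-12-01'
       AND sale_date <  DATE '2010-01-01')
) INHERITS (sales);

-- The plan should show only the matching child being scanned:
EXPLAIN SELECT sum(amount)
FROM sales
WHERE sale_date BETWEEN DATE '2009-12-10' AND DATE '2009-12-15';
```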
You have to maintain scripts to drop/create partitions, since partitions are
not first-class objects in Postgres yet (hopefully in a future release).
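This is the kind of DDL those maintenance scripts end up generating, typically from cron once a day. A minimal sketch (names and date-format convention are assumptions, not from the original post): create tomorrow's child table, plus a trigger so inserts into the parent are routed to the right child.

```sql
-- Nightly job: create the next day's partition.
CREATE TABLE sales_2010_01_01 (
    CHECK (sale_date >= DATE '2010-01-01'
       AND sale_date <  DATE '2010-01-02')
) INHERITS (sales);

-- Route inserts on the parent to the matching child table.
CREATE OR REPLACE FUNCTION sales_insert_trigger() RETURNS trigger AS $$
BEGIN
    EXECUTE 'INSERT INTO sales_'
            || to_char(NEW.sale_date, 'YYYY_MM_DD')
            || ' SELECT ($1).*'
    USING NEW;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_insert
    BEFORE INSERT ON sales
    FOR EACH ROW EXECUTE PROCEDURE sales_insert_trigger();
```

Note the trigger errors out if the target child does not exist yet, which is exactly why the creation script has to stay ahead of the data.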
If you are not sure about how large your tables will get...bite the
bullet and partition your data. You will be glad you did so.
On Thu, Dec 24, 2009 at 6:42 AM, Radhika S <rs88820(at)gmail(dot)com> wrote:
> We currently have a large table (9 million rows) of which only the last
> couple of days worth of data is queried on a regular basis.
> To improve performance we are thinking of partitioning the table.
> One idea is:
> Current_data = last days worth
> archive_data < today (goes back to 2005)
> The idea being proposed at work is:
> current_data = today's data
> prior years data - be broken down into one table per day
> archive_data - data older than a year.
> My question is:
> a) Does Postgres suffer a performance hit, say, if there are 200 child tables?
> b) What about aggregation between dates in the last year, e.g. total sales for
> firm A for the last year? It will need to look up n number of tables.
> Any ideas, tips, gotchas in implementing partitioning would be welcome. It
> is a somewhat mission-critical application (not trading, so not as mission critical).
> How expensive is maintaining so many partitions both in terms of my writing
> / maintaining scripts and performance.
> Thanks in advance.