Re: Large number of tables slow insert

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Matthew Wakeling" <matthew(at)flymine(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large number of tables slow insert
Date: 2008-08-26 15:29:15
Message-ID: dcc563d10808260829s6d397b7egd364589c9c5b16b1@mail.gmail.com
Lists: pgsql-performance

On Tue, Aug 26, 2008 at 6:50 AM, Matthew Wakeling <matthew(at)flymine(dot)org> wrote:
> On Sat, 23 Aug 2008, Loic Petit wrote:
>>
>> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
>> of sensors. In order to have good performance when querying by timestamp on
>> each sensor, I partitioned my measures table per sensor, so I created a lot
>> of tables.
>
> As far as I can see, you are having performance problems as a direct result
> of this design decision, so it may be wise to reconsider. If you have an
> index on both the sensor identifier and the timestamp, it should perform
> reasonably well. It would scale a lot better with thousands of sensors too.
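
A minimal sketch of that single-table design (the table and column names here
are hypothetical, not from Loic's schema):

  CREATE TABLE measures (
      sensor_id integer          NOT NULL,   -- which sensor the reading came from
      ts        timestamptz      NOT NULL,   -- when the reading was taken
      value     double precision
  );

  -- One composite index covers "all readings for sensor X in time range Y":
  CREATE INDEX measures_sensor_ts_idx ON measures (sensor_id, ts);

  SELECT value
    FROM measures
   WHERE sensor_id = 42
     AND ts >= '2008-08-01' AND ts < '2008-08-02';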

Properly partitioned, I'd expect one big table to outperform 3,000
sparsely populated tables.
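
If partitioning still looks attractive, one common way to partition that one
big table is by time range rather than per sensor. A rough 8.3-style
inheritance sketch, with a made-up child table name and date range:

  CREATE TABLE measures_2008_08 (
      CHECK (ts >= '2008-08-01' AND ts < '2008-09-01')
  ) INHERITS (measures);

  CREATE INDEX measures_2008_08_sensor_ts_idx
      ON measures_2008_08 (sensor_id, ts);

  -- Inserts go to the matching child directly (or via a trigger), and with
  -- constraint_exclusion = on, a query that filters on ts only scans the
  -- children whose CHECK constraint can match.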
