Re: Large number of tables slow insert

From: Matthew Wakeling <matthew(at)flymine(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large number of tables slow insert
Date: 2008-08-26 12:50:48
Message-ID: alpine.DEB.1.10.0808261348190.4454@aragorn.flymine.org

On Sat, 23 Aug 2008, Loic Petit wrote:
> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number of sensors. In order to get good
> performance when querying by timestamp on each sensor, I partitioned my measures table per sensor. Thus I create
> a lot of tables.

As far as I can see, you are having performance problems as a direct
result of this design decision, so it may be wise to reconsider. If you
have an index on both the sensor identifier and the timestamp, it should
perform reasonably well. It would scale a lot better with thousands of
sensors too.
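A minimal sketch of that single-table alternative (the table and column names here are illustrative, not taken from the original schema):

```sql
-- One measures table shared by all sensors, instead of one table per sensor
CREATE TABLE measures (
    sensor_id   integer     NOT NULL,
    measured_at timestamptz NOT NULL,
    value       double precision
);

-- Composite index: a query that filters on sensor_id and a timestamp range
-- can be satisfied by a single index range scan
CREATE INDEX measures_sensor_time_idx
    ON measures (sensor_id, measured_at);

-- A per-sensor time-range query then touches only the matching index range:
SELECT *
FROM measures
WHERE sensor_id = 42
  AND measured_at >= '2008-08-01'
  AND measured_at <  '2008-09-01';
```

Inserts then go to a single table and a single index, rather than forcing the planner and the catalog to cope with thousands of relations.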

Matthew

--
And why do I do it that way? Because I wish to remain sane. Um, actually,
maybe I should just say I don't want to be any worse than I already am.
- Computer Science Lecturer
