Re: Large number of tables slow insert

From: "H(dot) Hall" <hhall1001(at)reedyriver(dot)com>
To: Loic Petit <tls(dot)wydd(at)free(dot)fr>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Large number of tables slow insert
Date: 2008-08-24 21:01:58
Message-ID: 48B1CC46.8080502@reedyriver.com
Lists: pgsql-performance

Loic Petit wrote:
> Hi,
>
> I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large
> number of sensors. In order to get good performance when querying by
> timestamp on each sensor, I partitioned my measures table per sensor,
> so I created a lot of tables.
> I simulated a large sensor network with 3000 nodes, so I have ~3000
> tables. It appears that each insert (in a separate transaction) takes
> about 300 ms (3-4 inserts per second), even in tables holding just a
> few tuples (< 10). As you can understand, this is not efficient at
> all, because I need to handle a lot of inserts.
Can you tell us what kind of application this is? It sounds like a
control-systems application where you write the current values of the
sensors with each scan of a PLC. If so, is that a good idea? Also, are
3,000 sensors realistic? That would be a lot of sensors for one
control system.
>
> Do you have any idea why it is so slow, and how I can get good insert performance?
How often do you write data for a sensor?
One write per sensor per second = 3,000 writes per second.
That would be an insert, plus updates to each of your 6 indexes, every
0.33 ms.
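The arithmetic above can be sketched as a back-of-envelope calculation (the 3,000-sensor count and 6 indexes come from the thread; the one-write-per-sensor-per-second rate is the assumption stated above):

```python
# Back-of-envelope throughput math for the scenario discussed above.
SENSORS = 3000          # simulated nodes (from the thread)
WRITES_PER_SENSOR = 1   # assumed: one write per sensor per second
INDEXES = 6             # btree indexes on each measures_N table

writes_per_second = SENSORS * WRITES_PER_SENSOR          # 3000 inserts/s
interval_ms = 1000.0 / writes_per_second                 # ~0.33 ms between inserts
index_updates_per_second = writes_per_second * INDEXES   # 18000 btree updates/s

print(f"{writes_per_second} inserts/s, one every {interval_ms:.2f} ms, "
      f"plus {index_updates_per_second} index updates/s")
```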

Is that a good idea? Is there a better strategy? What are you measuring
with the instruments, e.g. is this a process plant or a manufacturing
facility? What will people do with this data?
>
> My test machine: Intel P4 2.4 GHz, 512 MB RAM, Ubuntu Server (ext3).
> iostat reports ~0.5 MB/s reading and ~6-7 MB/s writing during
> constant inserts.
>
> Here is the DDL of the measures tables:
> -------------------------------------------------------
> CREATE TABLE measures_0
> (
> "timestamp" timestamp without time zone,
> storedtime timestamp with time zone,
> count smallint,
> "value" smallint[]
> )
> WITH (OIDS=FALSE);
> CREATE INDEX measures_0_1_idx
> ON measures_0
> USING btree
> ((value[1]));
>
> -- Index: measures_0_2_idx
> CREATE INDEX measures_0_2_idx
> ON measures_0
> USING btree
> ((value[2]));
>
> -- Index: measures_0_3_idx
> CREATE INDEX measures_0_3_idx
> ON measures_0
> USING btree
> ((value[3]));
>
> -- Index: measures_0_count_idx
> CREATE INDEX measures_0_count_idx
> ON measures_0
> USING btree
> (count);
>
> -- Index: measures_0_timestamp_idx
> CREATE INDEX measures_0_timestamp_idx
> ON measures_0
> USING btree
> ("timestamp");
>
> -- Index: measures_0_value_idx
> CREATE INDEX measures_0_value_idx
> ON measures_0
> USING btree
> (value);
> -------------------------------------------------------
>
> Regards
>
> Loïc Petit
>
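For comparison, a common alternative to one table per sensor is a single table keyed by sensor id, with a composite index serving the query-by-sensor-and-timestamp pattern. A sketch only; this is not something proposed in the thread, and the table and index names are illustrative:

```sql
-- One table for all sensors; queries filter on (sensor_id, "timestamp").
CREATE TABLE measures
(
  sensor_id   integer NOT NULL,
  "timestamp" timestamp without time zone,
  storedtime  timestamp with time zone,
  count       smallint,
  "value"     smallint[]
)
WITH (OIDS=FALSE);

-- A single composite index replaces ~3000 per-table timestamp indexes.
CREATE INDEX measures_sensor_ts_idx
  ON measures
  USING btree
  (sensor_id, "timestamp");
```

This keeps planner and catalog overhead low and lets every insert hit the same small set of indexes, at the cost of losing per-sensor partition pruning.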

--
H. Hall
ReedyRiver Group LLC
http://www.reedyriver.com
