
Large number of tables slow insert

From: "Loic Petit" <tls(dot)wydd(at)free(dot)fr>
To: pgsql-performance(at)postgresql(dot)org
Subject: Large number of tables slow insert
Date: 2008-08-23 01:41:27
Message-ID:
Lists: pgsql-performance

I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
of sensors. In order to get good performance when querying by timestamp on
each sensor, I partitioned my measures table per sensor, which creates a
lot of tables.
I simulated a large sensor network with 3000 nodes, so I have ~3000 tables.
It turns out that each insert (in separate transactions) takes about 300 ms
(3-4 inserts per second), even in tables that hold just a few tuples (< 10).
As you can imagine, that is not efficient at all, because I need to process
a lot of inserts.

Do you have any idea why it is so slow, and how I can get good insert performance?

My test machine: Intel P4 2.4 GHz, 512 MB RAM, Ubuntu Server (ext3).
iostat reports ~0.5 MB/s reads and ~6-7 MB/s writes during constant inserts.
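To make the setup above concrete, here is a minimal sketch of the per-sensor partition routing described in the mail, with many rows committed in one transaction rather than one transaction per row. It uses Python's built-in sqlite3 as a stand-in for PostgreSQL (the routing and batching logic is the same); the `insert_batch` helper and the sample row values are hypothetical, not from the original setup.

```python
import sqlite3

# SQLite stands in for PostgreSQL here; the idea -- routing each sensor's
# rows to its own measures_N partition table and committing many rows per
# transaction -- is the same. Table naming follows the schema in the mail.
conn = sqlite3.connect(":memory:")

NUM_SENSORS = 3  # 3000 in the real setup; kept small for the sketch
for sensor in range(NUM_SENSORS):
    conn.execute(
        f'CREATE TABLE measures_{sensor} '
        '("timestamp" TEXT, storedtime TEXT, count INTEGER, "value" BLOB)'
    )

def insert_batch(conn, rows):
    """Insert (sensor_id, timestamp, storedtime, count, value) rows,
    each routed to its per-sensor partition, in a single transaction."""
    with conn:  # one COMMIT for the whole batch, not one per row
        for sensor_id, ts, stored, cnt, val in rows:
            conn.execute(
                f"INSERT INTO measures_{sensor_id} VALUES (?, ?, ?, ?)",
                (ts, stored, cnt, val),
            )

# Hypothetical sample data: one reading per sensor.
rows = [(s, "2008-08-23 01:41:27", "2008-08-23 01:41:28", 1, b"\x00")
        for s in range(NUM_SENSORS)]
insert_batch(conn, rows)
print(conn.execute("SELECT COUNT(*) FROM measures_0").fetchone()[0])  # -> 1
```

Committing per batch rather than per row removes one fsync-bound COMMIT per insert, which matters at 3-4 transactions per second.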

Here is the DDL of the measures tables:
CREATE TABLE measures_0 (
 "timestamp" timestamp without time zone,
 storedtime timestamp with time zone,
 count smallint,
 "value" smallint[]
);
CREATE INDEX measures_0_1_idx
 ON measures_0
 USING btree

-- Index: measures_0_2_idx
CREATE INDEX measures_0_2_idx
 ON measures_0
 USING btree

-- Index: measures_0_3_idx
CREATE INDEX measures_0_3_idx
 ON measures_0
 USING btree

-- Index: measures_0_count_idx
CREATE INDEX measures_0_count_idx
 ON measures_0
 USING btree (count);

-- Index: measures_0_timestamp_idx
CREATE INDEX measures_0_timestamp_idx
 ON measures_0
 USING btree ("timestamp");

-- Index: measures_0_value_idx
CREATE INDEX measures_0_value_idx
 ON measures_0
 USING btree ("value");
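Note that with six indexes per partition table, every INSERT has to update each btree as well as the heap. A rough sketch of the usual mitigation for bulk loading (load first, build indexes afterwards) follows, again using SQLite as a stand-in; the TEXT/BLOB column types and the row generator are substitutions for illustration, not the original PostgreSQL types.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE measures_0 '
             '("timestamp" TEXT, storedtime TEXT, count INTEGER, "value" BLOB)')

# Hypothetical sensor readings for the sketch.
rows = [(f"t{i}", f"s{i}", i % 100, b"\x00") for i in range(50_000)]

# Bulk-load first, in one transaction; with the indexes already in place,
# each secondary index would otherwise be updated on every single INSERT.
with conn:
    conn.executemany("INSERT INTO measures_0 VALUES (?, ?, ?, ?)", rows)

# Build the name-obvious indexes from the DDL once, after the load.
conn.execute('CREATE INDEX measures_0_timestamp_idx ON measures_0 ("timestamp")')
conn.execute('CREATE INDEX measures_0_count_idx ON measures_0 (count)')

print(conn.execute("SELECT COUNT(*) FROM measures_0").fetchone()[0])  # -> 50000
```

For a steady insert stream (rather than a one-off load), the analogous step is dropping any index that no query actually needs, since each one multiplies the per-row write cost across all 3000 tables.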


Loïc Petit

