Re: Using Postgres to store high volume streams of sensor readings

From: "Ciprian Dorin Craciun" <ciprian(dot)craciun(at)gmail(dot)com>
To: "marcin mank" <marcin(dot)mank(at)gmail(dot)com>
Cc: "Michal Szymanski" <dyrex(at)poczta(dot)onet(dot)pl>, pgsql-general(at)postgresql(dot)org
Subject: Re: Using Postgres to store high volume streams of sensor readings
Date: 2008-11-24 05:27:05
Message-ID: 8e04b5820811232127g6ff6e8balec2145b144e67c9f@mail.gmail.com
Lists: pgsql-general

On Mon, Nov 24, 2008 at 3:42 AM, marcin mank <marcin(dot)mank(at)gmail(dot)com> wrote:
>> Yes, the figures are like this:
>> * average number of raw inserts / second (without any optimization
>> or previous aggregation): #clients (~ 100 thousand) * #sensors (~ 10)
>> / 6seconds = 166 thousand inserts / second...
>
> this is average?
> 166 000 * 20 bytes per record * 86400 seconds per day = 280GB / day ,
> not counting indices.
>
> What is the time span You want to have the data from?
>
> Greetings
> Marcin
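[The back-of-envelope figures above can be checked with a quick sketch; the client/sensor counts, the 6-second reporting interval, and the 20-byte record size are all taken from the thread:]

```python
# Back-of-envelope estimate of insert rate and daily raw volume,
# using the figures quoted in the thread (no indexes included).
clients = 100_000      # ~100 thousand clients
sensors = 10           # ~10 sensors per client
interval_s = 6         # each sensor reports once every 6 seconds
record_bytes = 20      # assumed raw record size, per Marcin's estimate

inserts_per_sec = clients * sensors / interval_s            # ~166,667 inserts/s
bytes_per_day = inserts_per_sec * record_bytes * 86_400     # ~288 GB/day (decimal GB)

print(f"{inserts_per_sec:,.0f} inserts/s, {bytes_per_day / 1e9:.0f} GB/day")
```

[Marcin's 280 GB/day uses the rounded 166,000 inserts/s; the exact figures land a few GB higher.]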

Well, I'm not sure about the archival period... Maybe a day, maybe a
week... For the moment I'm just struggling with the insert speed.
(We could also use sharding -- horizontal partitioning across
different machines -- and this would reduce the load...)
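[The sharding idea could be as simple as routing each client's readings to a fixed machine; this is a hypothetical sketch, not something from the thread -- the shard count and routing function are made-up example values:]

```python
def shard_for_client(client_id: int, num_shards: int = 4) -> int:
    # Hypothetical routing: a simple modulo keeps all of one client's
    # readings on the same machine, so each machine sees roughly
    # 1/num_shards of the total insert load.
    return client_id % num_shards

# With 4 shards, the ~166 thousand inserts/s would drop to
# roughly 42 thousand inserts/s per machine.
```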

Ciprian.
