Re: Storing sensor data

From: Nikolas Everett <nik9000(at)gmail(dot)com>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Ivan Voras <ivoras(at)freebsd(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Storing sensor data
Date: 2009-05-28 13:38:41
Message-ID: d4e11e980905280638j41f25d0fq594e181e36cd0d62@mail.gmail.com
Lists: pgsql-performance

Option 1 is somewhere between 2 and 3 times more work for the database
than option 2.

Do you need every sensor update to hit the database? In a situation like
this I'd be tempted to keep the current values in the application itself and
then sweep them all into the database periodically. If some of the sensor
updates should hit the database faster, you could push those in as you get
them rather than wait for your sweeper. This setup has the advantage that
you can scale up the number of sensors and the frequency at which they report
without having to scale up the disks. You can also do the sweeping all in
one transaction or even in one batch update.
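A minimal sketch of that sweeper pattern in Python. The table name `sensor_log`, its columns, and the helper names are assumptions, not anything from this thread; a real implementation would send the batch through a parameterized query (e.g. psycopg2's `execute_values`) rather than building SQL by string formatting, which is done here only to keep the sketch self-contained:

```python
import threading
import time


class SensorSweeper:
    """Keep only the latest reading per sensor in memory, then flush
    everything to the database periodically as one batch statement."""

    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}  # sensor_id -> (value, unix timestamp)

    def record(self, sensor_id, value, ts=None):
        # Overwrites any earlier reading for this sensor between sweeps,
        # so high-frequency sensors don't inflate the write volume.
        with self._lock:
            self._latest[sensor_id] = (value, ts if ts is not None else time.time())

    def sweep(self):
        # Swap the buffer out under the lock, then build a single
        # multi-row INSERT so the whole sweep is one statement and can
        # run in one transaction.
        with self._lock:
            batch, self._latest = self._latest, {}
        if not batch:
            return None
        values = ", ".join(
            "(%d, %.5f, to_timestamp(%.3f))" % (sid, val, ts)
            for sid, (val, ts) in sorted(batch.items())
        )
        return "INSERT INTO sensor_log (sensor_id, value, ts) VALUES %s;" % values
```

The sweep itself can run on a timer; sensors that must hit the database immediately just bypass `record` and write directly, as described above.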

On Thu, May 28, 2009 at 9:31 AM, Heikki Linnakangas <
heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:

> Ivan Voras wrote:
>
>> The volume of sensor data is potentially huge, on the order of 500,000
>> updates per hour. Sensor data is few numeric(15,5) numbers.
>>
>
> Whichever design you choose, you should also consider partitioning the
> data.
>

Amen. Do that.
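For what partitioning means concretely here, a hedged sketch (table and column names are assumptions; in the PostgreSQL 8.x era this was done with table inheritance plus CHECK constraints so constraint exclusion can skip irrelevant partitions):

```sql
-- Hypothetical sketch: one child table per month of sensor data.
CREATE TABLE sensor_log (
    sensor_id integer       NOT NULL,
    ts        timestamptz   NOT NULL,
    value     numeric(15,5)
);

CREATE TABLE sensor_log_2009_05 (
    CHECK (ts >= '2009-05-01' AND ts < '2009-06-01')
) INHERITS (sensor_log);

-- Rows are routed to the right child, e.g. via a trigger on the
-- parent or by targeting the partition directly:
INSERT INTO sensor_log_2009_05 VALUES (1, '2009-05-28 13:38:41+00', 12.34500);
```

Dropping a whole month of old data then becomes a cheap `DROP TABLE` on one child instead of a huge DELETE.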
