Re: Storing sensor data

From: Alexander Staubo <alex(at)bengler(dot)no>
To: Ivan Voras <ivoras(at)freebsd(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Storing sensor data
Date: 2009-05-28 15:34:18
Message-ID: 88daf38c0905280834o278994a6w54b7f0bf67bef97c@mail.gmail.com
Lists: pgsql-performance

On Thu, May 28, 2009 at 5:06 PM, Ivan Voras <ivoras(at)freebsd(dot)org> wrote:
>> If you require precise data with the ability to filter, aggregate and
>> correlate over multiple dimensions, something like Hadoop -- or one of
>> the Hadoop-based column database implementations, such as HBase or
>> Hypertable -- might be a better option, combined with MapReduce/Pig to
>> execute analysis jobs
>
> This looks like an interesting idea to investigate. Do you have more
> experience with such databases? How do they fare with the following
> requirements:

We might want to take this discussion off-list, since this list is
about PostgreSQL. Feel free to reply privately.

> * Storing large datasets (do they pack data well in the database? No
> wasted space like in e.g. hash tables?)

Column databases like Hypertable and HBase are designed to store data
quite efficiently. Columns are grouped into units called column
families and stored together in chunks usually called SSTables, after
the Google Bigtable paper. (When you design your database you must
determine which columns are usually accessed together, in order to
avoid incurring the I/O cost of loading non-pertinent columns.) Each
SSTable is like a partition. When a chunk is written to disk, each
column is stored sequentially and compressed, which makes for good
compression ratios.
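
As a rough illustration (pseudocode in the same spirit as the insert
example further down; the create_table()/insert() calls are made up,
not any particular product's API), you might split raw readings and
sensor metadata into separate column families, so that a scan over
readings never has to pull the metadata off disk:

# Hypothetical schema: one table, two column families.
# A scan that touches only "value:*" columns never reads the "meta:*" family.
db.create_table("readings", column_families=["value", "meta"])
db.insert("readings", "temperature_sensor", "value:1", 23, "200905281725023")
db.insert("readings", "temperature_sensor", "meta:location", "lab 3", "200905281725023")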

I have used HBase, but I don't have any feel for how much space it
wastes. In theory, though, space usage should be better than with
PostgreSQL. I have also used Cassandra, another column database I
would recommend, which is very efficient. In many ways I prefer
Cassandra to HBase -- it's leaner, completely decentralized (no single
point of failure) and independent of the rather huge, monolithic
Hadoop project -- but it does not currently support MapReduce. If you
want some kind of distributed analysis system, you will need to write
it yourself.
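
To give a flavor of what "write it yourself" might mean, here is a
minimal single-process sketch in Python (entirely hypothetical; a real
job would read rows from the store via a range scan and run
distributed across nodes). It just averages readings per sensor,
using the three readings from the example further down:

def average_per_sensor(rows):
    # rows: iterable of (row_key, column_name, value) tuples, e.g. the
    # result of a range scan over the "readings" table described below.
    sums, counts = {}, {}
    for key, _column, value in rows:
        sums[key] = sums.get(key, 0) + value
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

rows = [("temperature_sensor", "value:1", 23),
        ("temperature_sensor", "value:2", 27),
        ("temperature_sensor", "value:3", 21)]
print(average_per_sensor(rows))  # {'temperature_sensor': 23.666...}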

All three column stores support mapping information by a time
dimension. Each time you write a key, you also provide a timestamp. In
theory you can retain the entire history of a single key. HBase lets
you specify how many revisions to retain; I'm not sure what Cassandra
does. Cassandra also introduces the notion of a "supercolumn family",
another grouping level which lets you use the timestamp as a column
key. To explain how this works, consider the following inserts:

# insert(table_name, key, column, value, timestamp)
db.insert("readings", "temperature_sensor", "value:1", 23, "200905281725023")
db.insert("readings", "temperature_sensor", "value:2", 27, "200905281725023")
db.insert("readings", "temperature_sensor", "value:3", 21, "200905281725023")

The resulting "temperature_sensor" row will have three column values:

  value:1   value:2   value:3
       23        27        21

You can keep adding values and the row will get bigger. Because
columns are dynamic, only that row will grow; all other rows will stay
the same size. Cassandra users often treat the column name as a kind
of value -- imagine it as subindexing an array.
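
For sensor data you might, for example, use the reading's own sample
time as the column name, so each row becomes an ever-growing, ordered
array of samples (pseudocode again; the get_slice() range read is made
up for illustration):

db.insert("readings", "temperature_sensor", "value:200905281725023", 23, "200905281725023")
db.insert("readings", "temperature_sensor", "value:200905281725083", 24, "200905281725083")

# Fetch every sample between two points in time for one sensor.
db.get_slice("readings", "temperature_sensor",
             start="value:200905281725000", end="value:200905281726000")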

As you can see, I also passed a timestamp (the 2009.. bit), which is
used for versioning. Since anyone can write to any node in a cluster,
Cassandra needs to be able to resolve conflicts.

Note that these databases are inherently distributed. You can run them
on a single node just fine -- and that might be appropriate in your
case -- but they really shine when you run a whole cluster. Cassandra
is multi-master, so you can just boot up a number of nodes and read
from/write to any of them.
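
Conceptually (again pseudocode, not a real client API), a client can
simply pick any node in the cluster and write to it; the node takes
care of replicating to its peers:

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

db = connect(random.choice(nodes))  # any node accepts reads and writes
db.insert("readings", "temperature_sensor", "value:4", 25, "200905281725143")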

> * Retrieving specific random records based on a timestamp or record ID?

Absolutely.

> * Storing "infinite" datasets (i.e. whose size is not known in
> advance - cf. e.g. hash tables)

This is one area where column databases are better than relational
ones. The schema is completely dynamic, and you can treat it as a hash
table.
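
For example, nothing stops two rows in the same table from having
completely different columns, and no schema change is needed
(pseudocode as above):

db.insert("readings", "temperature_sensor", "value:1", 23, "200905281725023")
db.insert("readings", "humidity_sensor", "value:1", 40, "200905281725023")
db.insert("readings", "humidity_sensor", "calibration:offset", -2, "200905281725023")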

> On the other hand, we could periodically transfer data from PostgreSQL
> into a simpler database (e.g. BDB) for archival purposes (at the
> expense of more code). Would they be better suited?

Considering the size and sequential nature of the data, I think they
would be a better match than a simple key-value store like BDB.

A.
