From: Greg Spiegelberg <gspiegelberg(at)gmail(dot)com>
To: kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>
Cc: Jorge Montero <jorge_montero(at)homedecorators(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: how to handle a big table for data log
Date: 2010-07-27 20:02:08
Message-ID: AANLkTikCsVPR5836FXk0quc4K-PogbxPd4yfhrig4HvP@mail.gmail.com
Lists: pgsql-performance
On Tue, Jul 20, 2010 at 9:51 PM, kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw> wrote:
> Let me make my problem clearer. There is a requirement to log data from a
> set of objects consistently. For example, the object may be a mobile phone
> that reports its location every 30s. To record its historical trace, I
> create a table like
>
> CREATE TABLE log_table
> (
>   id integer NOT NULL,
>   data_type integer NOT NULL,
>   data_value double precision,
>   ts timestamp with time zone NOT NULL,
>   CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)
> );
>
> In my location log example, the field data_type could be longitude or
> latitude.
>
>
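[A minimal sketch of the schema described above and its typical access pattern. It uses SQLite purely for illustration; the original runs on PostgreSQL, where "timestamp with time zone" is a real type and indexing behaves differently. The id, data_type codes, and coordinate values are invented for the example.]

```python
import sqlite3

LONGITUDE, LATITUDE = 0, 1  # assumed data_type codes, not from the original post

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE log_table (
        id         INTEGER NOT NULL,
        data_type  INTEGER NOT NULL,
        data_value DOUBLE PRECISION,
        ts         TEXT NOT NULL,  -- "timestamp with time zone" in PostgreSQL
        CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)
    )
""")

# One phone (id=1) reporting its position twice, 30 s apart.
samples = [
    (1, LONGITUDE, 120.996, "2010-07-20 09:00:00+08"),
    (1, LATITUDE,   24.786, "2010-07-20 09:00:00+08"),
    (1, LONGITUDE, 120.997, "2010-07-20 09:00:30+08"),
    (1, LATITUDE,   24.787, "2010-07-20 09:00:30+08"),
]
conn.executemany("INSERT INTO log_table VALUES (?, ?, ?, ?)", samples)

# Historical trace for one object and one measurement type: the composite
# primary key (id, data_type, ts) lets this run as an index range scan
# rather than a full-table scan, which is what matters as the log grows.
trace = conn.execute(
    "SELECT ts, data_value FROM log_table "
    "WHERE id = ? AND data_type = ? ORDER BY ts",
    (1, LONGITUDE),
).fetchall()
print(trace)
```

The key design point is that the primary key leads with the columns the trace query filters on, so retrieving one object's history touches only that object's index range.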
Many moons ago I saw GridSQL manage a massive log table. From memory, the
configuration was four database servers holding a cumulative 500M+ records,
and queries were running in under 5 ms. It may be worth a look:
http://www.enterprisedb.com/community/projects/gridsql.do
Greg