
Re: how to handle a big table for data log

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: kuopo <spkuo(at)cs(dot)nctu(dot)edu(dot)tw>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: how to handle a big table for data log
Date: 2010-07-27 21:13:42
Message-ID: 4C4F4C06.1050404@agliodbs.com
Lists: pgsql-performance
On 7/20/10 8:51 PM, kuopo wrote:
> Let me make my problem clearer. There is a requirement to log data from a
> set of objects consistently. For example, the object may be a mobile
> phone that reports its location every 30s. To record its historical
> trace, I create a table like
> CREATE TABLE log_table
> (
>   id integer NOT NULL,
>   data_type integer NOT NULL,
>   data_value double precision,
>   ts timestamp with time zone NOT NULL,
>   CONSTRAINT log_table_pkey PRIMARY KEY (id, data_type, ts)
> );
> In my location log example, the field data_type could be longitude or
> latitude.

If what you have is longitude and latitude, why this brain-dead EAV
table structure?  You're making the table twice as large and half as
useful for no particular reason.
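
For illustration, with that layout a single 30-second fix becomes two rows.
A minimal sketch, assuming data_type 1 and 2 are your codes for longitude
and latitude and 42 is some phone's id:

  INSERT INTO log_table (id, data_type, data_value, ts)
  VALUES (42, 1, 121.5654, now()),  -- longitude
         (42, 2, 25.0330, now());   -- latitude

Every reading doubles the row count, and you have to join the table to
itself just to get a usable coordinate back.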

Use the "point" datatype instead of anonymizing the data.

-- 
                                  -- Josh Berkus
                                     PostgreSQL Experts Inc.
                                     http://www.pgexperts.com

