Re: Using Postgres to store high volume streams of sensor readings

From: Scara Maccai <m_lists(at)yahoo(dot)it>
To: Ciprian Dorin Craciun <ciprian(dot)craciun(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Using Postgres to store high volume streams of sensor readings
Date: 2008-11-23 18:10:36
Message-ID: 301912.85421.qm@web28106.mail.ukl.yahoo.com
Lists: pgsql-general

> If you watch the speed, you'll see that the insert speed is the same,
> but the scan speed is worse (from 32k to 200).

As I said, I don't know a lot about these things, but I would like someone to comment on this (so that maybe I'll learn something!):

1) I thought the poor insert performance was due to poor "locality of access" during index maintenance, so since the timestamp is always increasing, I expected that putting it as the first column of the index would give better insert speed. But it didn't: why?
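To be concrete, the two column orders I'm comparing look roughly like this (the index names are made up just for the example):

-- index names here are only illustrative
-- timestamp as the leading column: new readings always have an increasing
-- timestamp, so I expected better insert locality from this one
CREATE INDEX taba_ts_first ON taba (timestamp, clientid, sensorid);

-- client and sensor first, timestamp last
CREATE INDEX taba_client_first ON taba (clientid, sensorid, timestamp);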

2) I thought that given a query like:

select * from taba where clientid=2 and sensor=4 and timestamp between 'start_t' and 'end_t'

there shouldn't be a huge difference in speed between an index defined as (timestamp, clientid, sensorid) and one defined as (clientid, sensor, timestamp), but I was VERY wrong: it's about 1000 times worse. How is that possible?
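For what it's worth, this is roughly how I compare the two cases: I build one of the indexes at a time and look at the plan and timing with EXPLAIN ANALYZE ('start_t' and 'end_t' stand for the real timestamps, and I'm writing sensorid throughout just for the example):

-- run once with each index in place and compare the plans/timings
EXPLAIN ANALYZE
SELECT * FROM taba
WHERE clientid = 2
  AND sensorid = 4
  AND timestamp BETWEEN 'start_t' AND 'end_t';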

It's obvious I don't know how multicolumn indexes work...
Can someone explain?
