
Re: Performance issues with large amounts of time-series data

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance issues with large amounts of time-series data
Date: 2009-08-26 18:52:14
Message-ID: 18555.1251312734@sss.pgh.pa.us
Lists: pgsql-performance
Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude(at)gmail(dot)com> writes:
> 2009/8/26 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
>> Do the data columns have to be bigint, or would int be enough to hold
>> the expected range?

> For the 300-sec tables I probably can drop it to an integer, but for
> 3600 and 86400 tables (1 hr, 1 day) will probably need to be BIGINTs.
> However, given that I'm on a 64-bit platform (sorry if I didn't
> mention it earlier), does it make that much of a difference?

Even more so.

> How does a float ("REAL") compare in terms of SUM()s ?

Casting to float or float8 is certainly a useful alternative if you
don't mind the potential for roundoff error.  On any non-ancient
platform those will be considerably faster than numeric.  BTW,
I think that 8.4 might be noticeably faster than 8.3 for summing
floats, because of the switch to pass-by-value for them.
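[As an illustration of the cast-to-float approach discussed above: the table and column names below are hypothetical, not taken from the thread.]

```sql
-- Hypothetical 300-second rollup table with a bigint counter column.
-- Casting to float8 inside the aggregate trades exact integer
-- arithmetic for speed: SUM(float8) avoids the slower numeric
-- accumulator that SUM(bigint) uses, at the cost of possible
-- roundoff error in very large sums.
SELECT bucket_start,
       SUM(bytes::float8) AS total_bytes
FROM   stats_300s
GROUP  BY bucket_start;
```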

			regards, tom lane

