Hrishikesh (हृषीकेश मेहेंदळ) <hashinclude(at)gmail(dot)com> writes:
> 2009/8/26 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
>> Do the data columns have to be bigint, or would int be enough to hold
>> the expected range?
> For the 300-sec tables I can probably drop it to an integer, but the
> 3600- and 86400-sec tables (1 hr, 1 day) will probably need to be BIGINTs.
> However, given that I'm on a 64-bit platform (sorry if I didn't
> mention it earlier), does it make that much of a difference?
Even more so.
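As a sketch of the sizing trade-off being discussed (the table and column names here are hypothetical, not from the original thread): a plain integer is 4 bytes and tops out at 2^31 - 1 (about 2.1 billion), so per-interval counters at 300-second resolution may fit, while hourly and daily rollups that aggregate many intervals can overflow it and genuinely need bigint.

```sql
-- Hypothetical schema illustrating the int-vs-bigint trade-off.
-- integer: 4 bytes, max 2^31 - 1;  bigint: 8 bytes, max 2^63 - 1.
CREATE TABLE stats_300 (
    ts    timestamptz NOT NULL,
    bytes integer     NOT NULL  -- 300-second buckets: counts stay small
);

CREATE TABLE stats_86400 (
    ts    timestamptz NOT NULL,
    bytes bigint      NOT NULL  -- daily rollups can exceed 2^31 - 1
);
```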
> How does a float ("REAL") compare in terms of SUM()s?
Casting to float or float8 is certainly a useful alternative if you
don't mind the potential for roundoff error. On any non-ancient
platform those will be considerably faster than numeric. BTW,
I think that 8.4 might be noticeably faster than 8.3 for summing
floats, because of the switch to pass-by-value for them.
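A minimal sketch of the cast being suggested (the table and column names are made up for illustration): SUM() over an integer or bigint column returns numeric, which is exact but comparatively slow; casting the input to float8 switches to the float aggregate path, which is faster on any non-ancient platform at the cost of possible roundoff error.

```sql
-- Exact but slower: SUM(bigint) accumulates in numeric
SELECT sum(bytes) FROM stats_86400;

-- Faster on most platforms, but subject to floating-point roundoff
SELECT sum(bytes::float8) FROM stats_86400;
```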
regards, tom lane
pgsql-performance, 2009-08-26
In response to: Re: Performance issues with large amounts of time-series data, from हृषीकेश मेहेंदळ <email@example.com> (2009-08-26 18:39:40)
Next in thread: Re: How to create a multi-column index with 2 dates, from Jeff Davis (2009-08-26 20:16:29)