Re: Strategy for doing number-crunching

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Matthew Foster <matthew(dot)foster(at)noaa(dot)gov>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Strategy for doing number-crunching
Date: 2012-01-04 21:11:21
Message-ID: 22136.1325711481@sss.pgh.pa.us
Lists: pgsql-novice

Matthew Foster <matthew(dot)foster(at)noaa(dot)gov> writes:
> On Wed, Jan 4, 2012 at 10:48 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Matthew Foster <matthew(dot)foster(at)noaa(dot)gov> writes:
>>> We have a database with approximately 130M rows, and we need to produce
>>> statistics (e.g. mean, standard deviation, etc.) on the data. Right now,
>>> we're generating these stats via a single SELECT, and it is extremely
>>> slow...like it can take hours to return results.

>> What datatype are the columns being averaged? If "numeric", consider
>> casting to float8 before applying the aggregates. You'll lose some
>> precision but it'll likely be orders of magnitude faster.

> The data are type double.

Hmm. In that case I think you have some other problem that's hidden in
details you didn't show us. It should not take "hours" to process only
130M rows. This would best be taken up on pgsql-performance; please see
http://wiki.postgresql.org/wiki/Slow_Query_Questions

regards, tom lane
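
For illustration, here is a minimal sketch of the kind of aggregate query and
the float8 cast suggested earlier in the thread. The table and column names
(obs, val) are hypothetical stand-ins, not taken from the original report, and
the second form only matters when the column is declared numeric:

    -- Aggregating a numeric column directly: every row goes through
    -- arbitrary-precision arithmetic, which is comparatively slow.
    SELECT avg(val), stddev(val)
    FROM obs;

    -- Casting to float8 before aggregating: some precision is lost, but the
    -- work is done in hardware floating point and is typically much faster.
    SELECT avg(val::float8), stddev(val::float8)
    FROM obs;
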
