Optimizing sum() operations

From: "Dobes Vandermeer" <dobesv(at)gmail(dot)com>
To: pgsql-novice(at)postgresql(dot)org
Subject: Optimizing sum() operations
Date: 2008-10-03 08:51:06
Message-ID: 7324d9a20810030151n2341c5dcide9681bf74daa4cc@mail.gmail.com
Lists: pgsql-novice

I'm currently using sum() to compute historical values in reports;
basically, select sum(amount) on records where date <= '...' and
date >= '...' and who = X.
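
Concretely, the query looks something like this (the table name,
column names, and literal values below are placeholders standing in
for my real schema):

    SELECT sum(amount)
      FROM transactions          -- hypothetical table name
     WHERE who  = 42             -- placeholder account id
       AND date >= '2008-01-01'  -- placeholder range start
       AND date <= '2008-09-30'; -- placeholder range end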

Of course, I have an index on the table for who and date, but that
still leaves potentially thousands of rows to scan.
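
If it helps, here is roughly what that index looks like as a single
composite index (names again placeholders):

    -- Hypothetical composite index covering both predicates;
    -- my actual table and index names differ.
    CREATE INDEX transactions_who_date_idx
        ON transactions (who, date);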

First, should I be worried about the performance of this, or will
postgres sum a few thousand rows in a few milliseconds on a decent
system anyway?
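
(For measuring this, I assume EXPLAIN ANALYZE on the same query is
the right tool, e.g.:

    -- Shows the chosen plan plus actual row counts and timings.
    EXPLAIN ANALYZE
    SELECT sum(amount)
      FROM transactions
     WHERE who = 42
       AND date >= '2008-01-01'
       AND date <= '2008-09-30';

but I'm mainly asking about what performance to expect on typical
hardware.)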

Second, if this is a concern, is there a best practice for optimizing
these kinds of queries?

Any help or advice appreciated!
