I'm currently using sum() to compute historical values in reports;
basically SELECT sum(amount) on records WHERE date >= '...' AND
date <= '...' AND who = X.
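Concretely, the query looks roughly like this (table and column names simplified; the actual dates and the value of X vary per report):

```sql
SELECT sum(amount)
FROM records
WHERE who = X
  AND date >= '...'
  AND date <= '...';
```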
Of course, I have an index on the table for who and date, but that
still leaves potentially thousands of rows to scan.
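For reference, the index I'm describing is along these lines (a composite index covering both columns, so the planner can do a single range scan per who; names are illustrative):

```sql
CREATE INDEX records_who_date_idx ON records (who, date);
```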
First, should I be worried about the performance of this, or will
Postgres sum a few thousand rows in a few milliseconds on a decent
machine?
Second, if this is a concern, is there a best practice for optimizing
these kinds of queries?
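For example, one approach I've read about is maintaining a pre-aggregated rollup table and summing the much smaller set of rollup rows instead of the raw records. A sketch of what I mean (monthly granularity, hypothetical names):

```sql
-- Rollup table: one row per (who, month) instead of one per record.
CREATE TABLE records_monthly (
    who   integer,
    month date,     -- first day of the month
    total numeric,
    PRIMARY KEY (who, month)
);

-- Refresh periodically (e.g. nightly):
INSERT INTO records_monthly (who, month, total)
SELECT who, date_trunc('month', date)::date, sum(amount)
FROM records
GROUP BY who, date_trunc('month', date);
```

Is something along those lines the usual answer, or is there a better pattern?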
Any help or advice appreciated!