I'm currently using sum() to compute historical totals in reports;
basically a select sum(amount) over records where who = X and date
falls between two bounds.
Of course, I have an index on the table for who and date, but that
still leaves potentially thousands of rows to scan.
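For concreteness, here's roughly the shape of the table, index, and
query (all the names, values, and dates below are made up for
illustration; the real schema differs):

```sql
-- hypothetical schema; actual table/column names differ
CREATE TABLE records (
    who    integer NOT NULL,
    date   date    NOT NULL,
    amount numeric NOT NULL
);

-- composite index so the planner can scan one contiguous
-- (who, date) range instead of the whole table
CREATE INDEX records_who_date_idx ON records (who, date);

-- the report query
SELECT sum(amount)
FROM records
WHERE who = 42
  AND date >= '2023-01-01'
  AND date <= '2023-06-30';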
First, should I be worried about the performance of this, or will
Postgres sum a few thousand rows in a few milliseconds on a decent
system anyway?
Second, if this is a concern, is there a best practice for optimizing
these kinds of queries?
Any help or advice appreciated!