From: "Sean Davis" <sdavis2(at)mail(dot)nih(dot)gov>
To: "Dobes Vandermeer" <dobesv(at)gmail(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Optimizing sum() operations
Date: 2008-10-03 11:51:38
Message-ID: 264855a00810030451o66171b72uc96e7f302dfd7052@mail.gmail.com
Lists: pgsql-novice
On Fri, Oct 3, 2008 at 4:51 AM, Dobes Vandermeer <dobesv(at)gmail(dot)com> wrote:
> I'm currently using sum() to compute historical values in reports;
> basically select sum(amount) on records where date <= '...' and date
> >= '...' and who = X.
>
> Of course, I have an index on the table for who and date, but that
> still leaves potentially thousands of rows to scan.
>
> First, should I be worried about the performance of this, or will
> postgres sum a few thousand rows in a few milliseconds on a decent
> system anyway?
>
> Second, if this is a concern, is there a best practice for optimizing
> these kinds of queries?
You'll need to test to see what performance you get. That said,
indexing is a good place to start. You can always run EXPLAIN and
EXPLAIN ANALYZE on the queries to double-check the planner.
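For example, a composite index covering both columns in the WHERE
clause lets the planner satisfy the equality and range predicates with
a single index scan. A sketch, assuming hypothetical table and column
names inferred from the query in your message:

```sql
-- Hypothetical names ("records", "who", "date", "amount") taken from
-- the query described in the question.
-- A composite index on (who, date) matches an equality predicate on
-- "who" plus a range predicate on "date".
CREATE INDEX records_who_date_idx ON records (who, date);

-- EXPLAIN ANALYZE actually executes the query and reports the chosen
-- plan along with real row counts and timings.
EXPLAIN ANALYZE
SELECT sum(amount)
FROM records
WHERE who = 42
  AND date >= '2008-01-01'
  AND date <= '2008-09-30';
```

If the plan still shows a sequential scan or the sum is too slow even
with the index, the usual next step is to precompute running totals in
a summary table, but measure first.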
Sean