From: "Dobes Vandermeer" <dobesv(at)gmail(dot)com>
To: "Mark Roberts" <mailing_lists(at)pandapocket(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Optimizing sum() operations
Date: 2008-10-03 23:48:40
Message-ID: 7324d9a20810031648u34dad60co7e51b0c88031e055@mail.gmail.com
Lists: pgsql-novice
On Fri, Oct 3, 2008 at 4:06 PM, Mark Roberts
<mailing_lists(at)pandapocket(dot)com> wrote:
>> I think that if there are a lot of rows that match the query, it'll
>> take a long time, so I thought I'd start inquiring about whether
>> anyone has a good algorithm for accelerating these kinds of queries.
>
> The best solution that I've found for things like this is to look to
> data warehousing: if you have a frequently used aggregation of facts,
> then preaggregate (summarize) it and pull from there instead.
Do you mean creating another table and manually caching the values in
there? I'm not sure what "data warehousing" means in this context.
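For concreteness, here's my guess at what you might mean -- the table and column names below are just made up for illustration:

```sql
-- Hypothetical detail ("fact") table that the slow sum() runs against
CREATE TABLE payments (
    account_id  integer NOT NULL,
    paid_on     date    NOT NULL,
    amount      numeric NOT NULL
);

-- Pre-aggregated summary, rebuilt periodically (e.g. nightly)
CREATE TABLE payments_daily_summary AS
SELECT account_id, paid_on, sum(amount) AS total_amount
FROM payments
GROUP BY account_id, paid_on;

-- Queries then read from the much smaller summary table:
SELECT sum(total_amount)
FROM payments_daily_summary
WHERE account_id = 42
  AND paid_on BETWEEN '2008-09-01' AND '2008-09-30';
```

Is that roughly the idea, with the summary either rebuilt on a schedule or kept up to date by the application/triggers as rows arrive?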
--
Dobes Vandermeer
Director, Habitsoft Inc.
dobesv(at)habitsoft(dot)com
778-891-2922
Next Message: Harold A. Giménez Ch. | 2008-10-04 01:10:34 | Re: Optimizing sum() operations
Previous Message: Mark Roberts | 2008-10-03 23:06:43 | Re: Optimizing sum() operations