From: Joe Conway <mail(at)joeconway(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up aggregates
Date: 2002-12-06 18:10:45
Message-ID: 3DF0E825.7020400@joeconway.com
Lists: pgsql-performance
Josh Berkus wrote:
> While effective, this approach is costly in terms of update/insert
> processing. It is also limited to whatever aggregate requests you have
> anticipated ... it does no good for aggregates over a user-defined
> range.
I think this is where Oracle's materialized views come into play.
>
> What have other Postgres users done to speed up aggregates on large
> tables?
I've found that in most real-life applications, expensive aggregate queries
tend to be needed for management reporting, which does not need to be based on
up-to-the-second fresh data. For these types of reports, a summary through,
say, last night at midnight is normally perfectly adequate.
The simplest solution in these cases is to build a table to hold your
partially or completely summarized data, then report off of that. Use cron to
refresh these summary tables at convenient times (daily, every 2 hours, or
whatever).
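As a minimal sketch of that approach (the table and column names here are hypothetical, not from this thread), a nightly refresh script might look like:

```sql
-- Hypothetical summary table: daily totals per product,
-- precomputed so reports need not scan the large base table.
CREATE TABLE sales_summary (
    product_id  integer,
    sale_day    date,
    total_qty   bigint,
    total_amt   numeric
);

-- Refresh script, run from cron each night. DELETE rather than
-- TRUNCATE keeps the rebuild transaction-safe, so readers see
-- either the old summary or the new one, never an empty table.
BEGIN;
DELETE FROM sales_summary;
INSERT INTO sales_summary
SELECT product_id,
       date_trunc('day', sale_time)::date,
       sum(qty),
       sum(amount)
FROM sales
WHERE sale_time < current_date
GROUP BY 1, 2;
COMMIT;
```

A crontab entry such as `5 0 * * * psql -d mydb -f refresh_summary.sql` would then rebuild the summary shortly after midnight, and reports query sales_summary instead of aggregating the full sales table.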
Joe