
Re: Speeding up aggregates

From: Joe Conway <mail(at)joeconway(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Speeding up aggregates
Date: 2002-12-06 18:10:45
Message-ID: 3DF0E825.7020400@joeconway.com
Lists: pgsql-performance

Josh Berkus wrote:
> While effective, this approach is costly in terms of update/insert
> processing.  It is also limited to whatever aggregate requests you have
> anticipated ... it does no good for aggregates over a user-defined
> range.

I think this is where Oracle's materialized views come into play.

> 
> What have other Postgres users done to speed up aggregates on large
> tables?

I've found that in most real-life applications, expensive aggregate queries
tend to be needed for management reporting, which does not need to be based on
up-to-the-second fresh data. For these types of reports, a summary current
through, say, last night at midnight is normally perfectly adequate.

The simplest solution in these cases is to build a table to hold your 
partially or completely summarized data, then report off of that. Use cron to 
refresh these summary tables at convenient times (daily, every 2 hours, or 
whatever).
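As a sketch of that approach (the table, columns, and database name here are hypothetical, not from any particular schema), the summary table and its nightly refresh might look like:

```sql
-- Hypothetical summary table holding pre-aggregated daily totals
CREATE TABLE sales_summary (
    sale_date   date    PRIMARY KEY,
    order_count integer NOT NULL,
    total_amt   numeric NOT NULL
);

-- refresh_summary.sql: rebuild the summary inside one transaction,
-- so reports never see a half-refreshed table.
-- Example crontab entry to run it shortly after midnight:
--   5 0 * * *  psql -d mydb -f refresh_summary.sql
BEGIN;
DELETE FROM sales_summary;
INSERT INTO sales_summary (sale_date, order_count, total_amt)
SELECT sale_date, count(*), sum(amount)
FROM   sales
GROUP  BY sale_date;
COMMIT;
```

Reports then query sales_summary directly instead of aggregating over the full sales table on every request.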

Joe


