
Re: Optimizing sum() operations

From: Mark Roberts <mailing_lists(at)pandapocket(dot)com>
To: Dobes Vandermeer <dobesv(at)gmail(dot)com>
Cc: Sean Davis <sdavis2(at)mail(dot)nih(dot)gov>, pgsql-novice(at)postgresql(dot)org
Subject: Re: Optimizing sum() operations
Date: 2008-10-03 23:06:43
Message-ID: 1223075203.12105.432.camel@localhost
Lists: pgsql-novice
> This is a just a general theory that occurred to me, it's probably a
> pretty specialized kind of indexing that isn't supported by any RDBMS,
> but it's possible there could be a postgres plugin which did this,
> though.  Has anyone heard of something like that?

The problem with this is that there is generally more than one metadata
field for each key set - and frequently you might want to sum or max
each value.  I think the value of that kind of structure would quickly
be offset by the larger node size and the slower node retrieval it
causes.

> I think that if there are a lot of rows that match the query, it'll
> take a long time, so I thought I'd start inquiring about whether
> anyone has a good algorithm for accelerating these kinds of queries.

The best solution I've found for things like this comes from data
warehousing: if you have a frequently used aggregation of facts,
pre-aggregate (summarize) it into its own table and pull from there
instead.
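As a minimal sketch of that pattern (all table and column names here are hypothetical, and the summary would need to be refreshed by a trigger or a periodic batch job whenever the fact table changes):

```sql
-- Hypothetical fact table, one row per sale:
--   CREATE TABLE sales (sale_date date, product_id int, amount numeric);

-- Pre-aggregated summary, rebuilt periodically (e.g. nightly):
CREATE TABLE sales_daily_summary AS
SELECT sale_date,
       product_id,
       sum(amount) AS total_amount,
       count(*)    AS sale_count
FROM sales
GROUP BY sale_date, product_id;

-- Queries then scan the much smaller summary instead of the raw facts:
SELECT sum(total_amount)
FROM sales_daily_summary
WHERE product_id = 42;
```

The trade-off is staleness: the summary only reflects the facts as of its last refresh, so this works best when the reports can tolerate slightly out-of-date totals.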

