From: Mark Roberts <mailing_lists(at)pandapocket(dot)com>
To: Dobes Vandermeer <dobesv(at)gmail(dot)com>
Cc: Sean Davis <sdavis2(at)mail(dot)nih(dot)gov>, pgsql-novice(at)postgresql(dot)org
Subject: Re: Optimizing sum() operations
Date: 2008-10-03 23:06:43
Message-ID: 1223075203.12105.432.camel@localhost
Lists: pgsql-novice
> This is just a general theory that occurred to me; it's probably a
> pretty specialized kind of indexing that isn't supported by any RDBMS,
> but it's possible there could be a postgres plugin which did this,
> though. Has anyone heard of something like that?
The problem with this is that there's generally more than one metadata
field for each key set - and frequently you want to sum or max each of
them. I think the value of having this kind of structure would be
quickly eroded by the larger node size and the correspondingly slower
node retrieval times.
> I think that if there are a lot of rows that match the query, it'll
> take a long time, so I thought I'd start inquiring about whether
> anyone has a good algorithm for accelerating these kinds of queries.
The best solution that I've found for problems like this is to borrow
from data warehousing: if you have a frequently used aggregation of
facts, preaggregate (summarize) it into its own table and pull from
there instead.
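To make that concrete, here's a minimal sketch of the preaggregation
idea. The table and column names (transactions, account_id, amount,
created_at) are hypothetical, and you'd need to refresh the summary
periodically (or maintain it with triggers) as new facts arrive:

```sql
-- Build a small summary table, one row per account per day,
-- instead of re-scanning the big fact table on every query.
CREATE TABLE txn_daily_summary AS
SELECT account_id,
       date_trunc('day', created_at) AS day,
       sum(amount) AS total_amount
FROM transactions
GROUP BY account_id, date_trunc('day', created_at);

CREATE INDEX txn_daily_summary_idx
    ON txn_daily_summary (account_id, day);

-- Queries then aggregate the summary rows rather than the raw facts:
SELECT sum(total_amount)
FROM txn_daily_summary
WHERE account_id = 42
  AND day >= date '2008-09-01';
```

Since sum() is decomposable, summing the daily totals gives the same
answer as summing the raw rows, but over far fewer rows.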
-Mark