> I had written a piece of code about two years ago that used the
> aggregate feature of PostgreSQL to create an array of integers from an
> aggregate, as:
> select int_array_aggregate( column ) from table group by column
> While it seems pointless to create an array on a select, it has a
> purpose in OLAP. For instance, suppose you do this:
> create table fast_lookup as select reference,
> int_array_aggregate(result) from table group by reference
> The "fast_lookup" table now has all the result entries as an array in a
> single row. In the systems where I have used this, it has provided a
> dramatic improvement, especially when there are many identical
> "reference" entries in a classic "one to many" table.
> The question is, would a more comprehensive solution be wanted?
> Possible? Something like:
> create table fast_lookup as select reference, aggregate_array( field )
> from table group by reference
> Where the function aggregate_array accepts any data type.
> Any thoughts? I think I need to fix the code in the current
> /contrib/intagg anyway, so is it worth doing the extra work to include
> multiple data types?
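A sketch of the rollup described above, assuming contrib/intagg's
int_array_aggregate (and its companion int_array_enum, which expands an
integer array back into rows); table and column names are illustrative:

```sql
-- A classic one-to-many detail table: many result rows per reference.
CREATE TABLE detail (reference integer, result integer);

-- Collapse each reference's results into a single integer-array row.
CREATE TABLE fast_lookup AS
    SELECT reference, int_array_aggregate(result) AS results
    FROM detail
    GROUP BY reference;

-- fast_lookup now has one row per reference; expand when needed:
SELECT reference, int_array_enum(results) FROM fast_lookup;
```

The win comes from scanning one row per "reference" instead of one row
per detail entry.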
It's also useful in conjunction with statistical processing. There is
an array_accum function in PL/R; I made a post about it to the SQL list
the other day.
Here's the meat of it:
CREATE OR REPLACE FUNCTION array_accum (_name, name)
RETURNS _name
AS '...'      -- shared-library path elided here
LANGUAGE 'C';

CREATE AGGREGATE accumulate (
    sfunc = array_accum,
    basetype = name,
    stype = _name
);
regression=# SELECT accumulate(tablename) as cruft FROM pg_tables WHERE
tablename LIKE 'c%';
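For the type-generic aggregate_array asked about in the quoted mail, a
hypothetical sketch using the polymorphic pseudo-types anyelement and
anyarray together with the built-in array_append, assuming a server
version that supports polymorphic aggregates (the aggregate name is
illustrative):

```sql
-- Accumulate values of any element type into an array:
-- array_append adds each input value to the growing state array.
CREATE AGGREGATE aggregate_array (
    basetype = anyelement,
    sfunc    = array_append,
    stype    = anyarray,
    initcond = '{}'
);

SELECT aggregate_array(tablename) FROM pg_tables
WHERE tablename LIKE 'c%';
```

One polymorphic definition would then cover every data type, instead of
one hand-written aggregate per element type.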
and download at:
I'd be happy to split the array functions out of PL/R and submit them to
PATCHES if there is any interest.
In response to:
  Aggregate "rollup" (mlw, 2003-03-05 20:47:18)
pgsql-hackers by date:
  Previous: Aggregate "rollup" (mlw, 2003-03-05 20:47:18)
  Next: I am back (Bruce Momjian, 2003-03-05 21:11:20)