CREATE AGGREGATE name (
    BASETYPE = input_data_type,
    SFUNC = sfunc,
    STYPE = state_data_type
    [ , FINALFUNC = ffunc ]
    [ , INITCOND = initial_condition ]
)
CREATE AGGREGATE defines a new aggregate function. Some basic and commonly-used aggregate functions are included with the distribution; they are documented in Section 9.15. If one defines new types or needs an aggregate function not already provided, then CREATE AGGREGATE can be used to provide the desired features.
If a schema name is given (for example, CREATE AGGREGATE myschema.myagg ...) then the aggregate function is created in the specified schema. Otherwise it is created in the current schema.
An aggregate function is identified by its name and input data type. Two aggregates in the same schema can have the same name if they operate on different input types. The name and input data type of an aggregate must also be distinct from the name and input data type(s) of every ordinary function in the same schema.
An aggregate function is made from one or two ordinary functions: a state transition function sfunc, and an optional final calculation function ffunc. These are used as follows:
sfunc( internal-state, next-data-item ) ---> next-internal-state
ffunc( internal-state ) ---> aggregate-value
PostgreSQL creates a temporary variable of data type stype to hold the current internal state of the aggregate. At each input data item, the state transition function is invoked to calculate a new internal state value. After all the data has been processed, the final function is invoked once to calculate the aggregate's return value. If there is no final function then the ending state value is returned as-is.
An aggregate function may provide an initial condition, that is, an initial value for the internal state value. This is specified and stored in the database as a column of type text, but it must be a valid external representation of a constant of the state value data type. If it is not supplied then the state value starts out null.
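As a sketch of how these pieces fit together, here is a minimal sum-style aggregate built from the existing int4pl function (two-argument integer addition); the aggregate name mysum is arbitrary:

```sql
-- int4pl already has the sfunc(state, input) -> new-state shape,
-- so it can serve directly as the state transition function.
CREATE AGGREGATE mysum (
    BASETYPE = int4,
    SFUNC    = int4pl,
    STYPE    = int4,
    INITCOND = '0'     -- the state starts at zero
);

-- SELECT mysum(i) FROM generate_series(1, 4) AS t(i);  -- 10
```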
If the state transition function is declared "strict", then it cannot be called with null inputs. With such a transition function, aggregate execution behaves as follows. Null input values are ignored (the function is not called and the previous state value is retained). If the initial state value is null, then the first nonnull input value replaces the state value, and the transition function is invoked beginning with the second nonnull input value. This is handy for implementing aggregates like max. Note that this behavior is only available when state_data_type is the same as input_data_type. When these types are different, you must supply a nonnull initial condition or use a nonstrict transition function.
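A max-style aggregate can rely on exactly this behavior. The built-in int4larger function (greater of two integers) is declared strict, and omitting INITCOND makes the first nonnull input become the initial state; the aggregate name mymax is arbitrary:

```sql
-- State starts null; the first nonnull input replaces it, and
-- int4larger is then invoked from the second nonnull input onward.
CREATE AGGREGATE mymax (
    BASETYPE = int4,
    SFUNC = int4larger,
    STYPE = int4
);
```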
If the state transition function is not strict, then it will be called unconditionally at each input value, and must deal with null inputs and null transition values for itself. This allows the aggregate author to have full control over the aggregate's handling of null values.
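As an illustrative sketch (all names are made up), a nonstrict transition function can count null inputs, something a strict function would never even see:

```sql
-- Not declared STRICT, so it is called for every input value
-- and can inspect nulls itself.
CREATE FUNCTION count_nulls_sfunc(int8, int4) RETURNS int8 AS $$
    SELECT CASE WHEN $2 IS NULL THEN $1 + 1 ELSE $1 END;
$$ LANGUAGE sql;

CREATE AGGREGATE count_nulls (
    BASETYPE = int4,
    SFUNC = count_nulls_sfunc,
    STYPE = int8,
    INITCOND = '0'    -- nonnull start, so the state is never null
);
```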
If the final function is declared "strict", then it will not be called when the ending state value is null; instead a null result will be returned automatically. (Of course this is just the normal behavior of strict functions.) In any case the final function has the option of returning a null value. For example, the final function for avg returns null when it sees there were zero input rows.
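Putting all the pieces together, here is a sketch of an avg-style aggregate (all names are illustrative): the state is a two-element array holding the running sum and count, the strict transition function skips nulls, and the nonstrict final function performs the division:

```sql
-- State is {running_sum, running_count}.
CREATE FUNCTION myavg_sfunc(float8[], float8) RETURNS float8[] AS $$
    SELECT ARRAY[$1[1] + $2, $1[2] + 1];
$$ LANGUAGE sql STRICT;

-- Returns null when no rows were accumulated, like the built-in avg.
CREATE FUNCTION myavg_ffunc(float8[]) RETURNS float8 AS $$
    SELECT CASE WHEN $1[2] = 0 THEN NULL ELSE $1[1] / $1[2] END;
$$ LANGUAGE sql;

CREATE AGGREGATE myavg (
    BASETYPE = float8,
    SFUNC = myavg_sfunc,
    STYPE = float8[],
    FINALFUNC = myavg_ffunc,
    INITCOND = '{0,0}'
);
```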
name
    The name (optionally schema-qualified) of the aggregate function to create.
input_data_type
    The input data type on which this aggregate function operates. This can be specified as "ANY" for an aggregate that does not examine its input values (an example is count(*)).
sfunc
    The name of the state transition function to be called for each input data value. This is normally a function of two arguments, the first being of type state_data_type and the second of type input_data_type. Alternatively, for an aggregate that does not examine its input values, the function takes just one argument of type state_data_type. In either case the function must return a value of type state_data_type. This function takes the current state value and the current input data item, and returns the next state value.
state_data_type
    The data type for the aggregate's state value.
ffunc
    The name of the final function called to compute the aggregate's result after all input data has been traversed. The function must take a single argument of type state_data_type. The return data type of the aggregate is defined as the return type of this function. If ffunc is not specified, then the ending state value is used as the aggregate's result, and the return type is state_data_type.
initial_condition
    The initial setting for the state value. This must be a string constant in the form accepted for the data type state_data_type. If not specified, the state value starts out null.
The parameters of CREATE AGGREGATE can be written in any order, not just the order illustrated above.
See Section 31.10.
CREATE AGGREGATE is a PostgreSQL language extension. The SQL standard does not provide for user-defined aggregate functions.
For numbers I usually use SUM to compute totals, but for text you can create your own aggregate function to concatenate values. The following is an example:
CREATE FUNCTION concat (text, text) RETURNS text AS $$
BEGIN
    IF character_length($1) > 0 THEN
        RETURN $1 || ', ' || $2;
    ELSE
        RETURN $2;
    END IF;
END;
$$ LANGUAGE plpgsql;
CREATE AGGREGATE pegar (
sfunc = concat,
basetype = text,
stype = text,
initcond = ''
);
Then you can use something like:
SELECT paises.pais, pegar(ciudad) FROM ciudades JOIN paises ON ciudades.pais = paises.pais GROUP BY paises.pais;
-- Or try it this way
CREATE TABLE country (country_name varchar(64) NOT NULL);
INSERT INTO country VALUES ('Afghanistan');
INSERT INTO country VALUES ('Albania');
INSERT INTO country VALUES ('Algeria');
INSERT INTO country VALUES ('Andorra');
INSERT INTO country VALUES ('Angola');
INSERT INTO country VALUES ('Anguilla');
INSERT INTO country VALUES ('Argentina');
INSERT INTO country VALUES ('Armenia');
INSERT INTO country VALUES ('Aruba');
INSERT INTO country VALUES ('Ascension');
INSERT INTO country VALUES ('Australia');
INSERT INTO country VALUES ('Austria');
-- ... etc., etc.
CREATE AGGREGATE concat (
BASETYPE = text,
SFUNC = textcat,
STYPE = text,
INITCOND = ''
);
SELECT TRIM(', ' FROM (SELECT CONCAT(country_name||', ') FROM COUNTRY));
-- to get a comma-separated list of country names. This allows
-- using any separator you want without hard-coding it into the
-- stored procedure.
In order to create a MIN (MAX) aggregate you'll need a function that accepts two arguments of the same type and returns the least (greatest) of them.
For example, in order to implement a MAX aggregate for the type LTree, you'll need first to create a function:
CREATE OR REPLACE FUNCTION ltree_max(ltree, ltree)
RETURNS ltree AS $$
    SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END;
$$ LANGUAGE sql IMMUTABLE STRICT;
Then you could create the aggregate itself:
CREATE AGGREGATE max (
    BASETYPE = ltree,
    SFUNC = ltree_max,
    STYPE = ltree
);
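To try it out, something like the following should work (the table and data here are hypothetical, assuming the contrib ltree module is installed):

```sql
-- Exercise the new max aggregate on an ltree column.
CREATE TABLE paths (path ltree);
INSERT INTO paths VALUES ('Top.Hobbies');
INSERT INTO paths VALUES ('Top.Science');
SELECT max(path) FROM paths;
```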
An aggregate multiplication function, analogous to sum (the same should also be defined for other numeric types):
CREATE OR REPLACE FUNCTION mul2(FLOAT, FLOAT)
RETURNS FLOAT AS $$
DECLARE
    a ALIAS FOR $1;
    b ALIAS FOR $2;
BEGIN
    RETURN a * b;
END;
$$ LANGUAGE plpgsql;
CREATE AGGREGATE mul (
sfunc = mul2,
basetype = FLOAT,
stype = FLOAT,
initcond = '1'
);
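For example, with hypothetical data (with INITCOND = '1', mul returns the product of the column's values):

```sql
CREATE TABLE samples (x FLOAT);
INSERT INTO samples VALUES (2);
INSERT INTO samples VALUES (3);
INSERT INTO samples VALUES (4);
SELECT mul(x) FROM samples;   -- 24
```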
Although it's a very minor point, the example of:
SELECT TRIM(', ' FROM (SELECT CONCAT(country_name||', ') FROM COUNTRY));
can be trivially rewritten without the need for a subselect like this:
SELECT TRIM(', ' FROM CONCAT(country_name||', ')) FROM COUNTRY;
And, since this would typically only matter if you're dealing with either VERY large result sets or VERY complex joins, this practice should be continued outward: only TRIM() at the last possible moment (i.e., the outermost possible level) in order to reduce CPU overhead while computing temporary result sets.
In the single-layer example, PostgreSQL has to do almost exactly the same amount of work regardless of which way the SELECT is written. Moving scalar functions (like TRIM) to the outermost level of a SELECT involving aggregate functions helps keep the scalability toward O(n) instead of tending towards O(n!).
The difference can be seen if you EXPLAIN these two equivalent SQL statements:
select B.title, trim(' & ' from concat(A.name||' & '))
from tbl_books as B
natural join tbl_book_authors
natural join vw_authors_name A
group by B.title;
select B.title, (
    select trim(' & ' from concat(A.name||' & '))
    from vw_authors_name A
    where A.author_id in (
        select BA.author_id
        from tbl_book_authors BA
        where BA.book_id = B.book_id
    )
)
from tbl_books B;
Oddly enough, for very small datasets (<100 rows in each table), the sub-select version is actually faster. As soon as you get any significant amount of data the join version becomes MUCH faster, at least for me. YMMV.