Re: Combining Aggregates

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Combining Aggregates
Date: 2016-01-19 05:16:26
Message-ID: CAKJS1f8=RzEE+3Lc_oR5+9gY759xtVtFCvCRNH+PZfjuLeHnkg@mail.gmail.com
Lists: pgsql-hackers

On 19 January 2016 at 18:04, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
wrote:

> Hi,
>
> On 01/19/2016 05:00 AM, David Rowley wrote:
>
>> On 19 January 2016 at 06:03, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com
>> <mailto:pavel(dot)stehule(at)gmail(dot)com>> wrote:
>>
>> ...
>
>>
>> It is strange, why hashaggregate is too slow?
>>
>>
>> Good question. I looked at this and found my VM was swapping like crazy.
>> Upon investigation it appears that's because the patch creates a memory
>> context per aggregated group, and in this case I've got 1 million of
>> them. That means we create 1 million contexts, each with an
>> ALLOCSET_SMALL_INITSIZE (1KB) initial block, which works out to about
>> 1GB of memory, which is more than my VM likes.
>>
>
> Really? Where do we create the memory context? IIRC string_agg uses the
> aggcontext directly, and indeed that's what I see in string_agg_transfn and
> makeStringAggState.
>
>
Yeah, all this talk relates to Robert's expandedstring-v1.patch, which
changes string_agg to use text and expanded objects. That also means a
memory context is created per aggregated group, which is rather a big
overhead.
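
For anyone following along, here's roughly what the per-group cost looks
like. This is just a sketch, not code from Robert's patch; the helper name
and context name are made up:

    #include "postgres.h"
    #include "utils/memutils.h"

    /*
     * Illustrative only: one small memory context per aggregated group.
     * Each context's first block is ALLOCSET_SMALL_INITSIZE (1kB), so
     * ~1 million groups means roughly 1GB in initial blocks alone, before
     * any per-group state is actually stored in them.
     */
    static MemoryContext
    make_per_group_context(MemoryContext aggcontext)
    {
        return AllocSetContextCreate(aggcontext,
                                     "per-group expanded object state",
                                     ALLOCSET_SMALL_MINSIZE,
                                     ALLOCSET_SMALL_INITSIZE, /* 1kB */
                                     ALLOCSET_SMALL_MAXSIZE);
    }

That 1kB initial block (plus the context header) per group is where the
~1GB for 1 million groups comes from.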

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
