From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Bruce Momjian <bruce(at)momjian(dot)us>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Default setting for enable_hashagg_disk
Date: 2020-07-26 18:34:06
Message-ID: 20200726183406.qlpunydmex4rxspl@development
Lists: pgsql-docs pgsql-hackers

On Sat, Jul 25, 2020 at 05:13:00PM -0700, Peter Geoghegan wrote:
>On Sat, Jul 25, 2020 at 5:05 PM Tomas Vondra
><tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>> I'm not sure what you mean by "reported memory usage doesn't reflect the
>> space used for transition state"? Surely it does include that; we've
>> built the memory accounting stuff pretty much exactly to do that.
>>
>> I think it's pretty clear what's happening - in the sorted case there's
>> only a single group getting new values at any moment, so when we decide
>> to spill we'll only add rows to that group and everything else will be
>> spilled to disk.
>
>Right.
>
>> In the unsorted case however we manage to initialize all groups in the
>> hash table, but at that point the groups are tiny and fit into work_mem.
>> As we process more and more data the groups grow, but we can't evict
>> them - at the moment we don't have that capability. So we end up
>> processing everything in memory, but significantly exceeding work_mem.
>
>work_mem was set to 200MB, which is more than the reported "Peak
>Memory Usage: 1605334kB". So either the random case significantly

That's 1.6GB, if I read it right. Which is more than 200MB ;-)
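FWIW the conversion I'm doing there, with EXPLAIN's kB being binary units:

    1605334 kB / 1024 / 1024 ≈ 1.53 GiB (~1.64e9 bytes)

which against work_mem = 200MB = 204800 kB is roughly an 8x overshoot.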

>exceeds work_mem and the "Peak Memory Usage" accounting is wrong
>(because it doesn't report this excess), or the random case really
>doesn't exceed work_mem but has a surprising advantage over the sorted
>case.
>
>--
>Peter Geoghegan
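
To make the sorted vs. random difference concrete, here's a toy
simulation of the spill policy described above (plain Python, not the
actual nodeAgg.c code - the unit sizes and accounting are made up; the
only thing it models is that existing groups keep growing in memory
while tuples for new groups get spilled once we're over the limit):

# Toy model of hash aggregation with spilling but no group eviction:
# tuples for groups already in the hash table are always processed in
# memory (the group's state keeps growing), while tuples that would
# create a *new* group after the limit is reached are spilled to disk.

def simulate(keys, limit):
    table = {}   # group key -> accumulated state "size"
    used = 0     # memory used by in-memory group states
    spilled = 0  # tuples spilled to disk
    for k in keys:
        if k in table:
            table[k] += 1   # existing group grows; never evicted
            used += 1
        elif used < limit:
            table[k] = 1    # new group still fits
            used += 1
        else:
            spilled += 1    # over the limit: spill new-group tuples
    return used, spilled

n_groups, rows_per_group, limit = 1000, 1000, 100_000

# Sorted input: groups arrive one after another, so once the limit is
# hit everything else goes to disk and memory stays at the limit.
sorted_keys = [g for g in range(n_groups) for _ in range(rows_per_group)]
print(simulate(sorted_keys, limit))   # -> (100000, 900000)

# Interleaved ("random") input: every group is created while still
# tiny, so nothing spills and memory grows far past the limit.
random_keys = [g for _ in range(rows_per_group) for g in range(n_groups)]
print(simulate(random_keys, limit))   # -> (1000000, 0)

The sorted case ends up pinned at the limit with most tuples spilled,
while the interleaved case spills nothing and finishes 10x over the
limit - the same shape as the EXPLAIN numbers we're discussing.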

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
