Re: Default setting for enable_hashagg_disk

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Default setting for enable_hashagg_disk
Date: 2020-06-24 21:02:10
Message-ID: 20200624210210.no7l4vihsw4yfvoz@development
Lists: pgsql-docs pgsql-hackers

On Wed, Jun 24, 2020 at 12:36:24PM -0700, Andres Freund wrote:
>Hi,
>
>On 2020-06-24 15:28:47 -0400, Robert Haas wrote:
>> On Wed, Jun 24, 2020 at 3:14 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
>> > FWIW, my gut feeling is that we'll end up have to separate the
>> > "execution time" spilling from using plain work mem, because it'll
>> > trigger spilling too often. E.g. if the plan isn't expected to spill,
>> > only spill at 10 x work_mem or something like that. Or we'll need
>> > better management of temp file data when there's plenty memory
>> > available.
>>
>> So, I don't think we can wire in a constant like 10x. That's really
>> unprincipled and I think it's a bad idea. What we could do, though, is
>> replace the existing Boolean-valued GUC with a new GUC that controls
>> the size at which the aggregate spills. The default could be -1,
>> meaning work_mem, but a user could configure a larger value if desired
>> (presumably, we would just treat a value smaller than work_mem as
>> work_mem, and document the same).
>
>To be clear, I wasn't actually thinking of hard-coding 10x, but having a
>config option that specifies a factor of work_mem. A factor seems better
>because it'll work reasonably for different values of work_mem, whereas
>a concrete size wouldn't.
>

I'm not quite convinced we need/should introduce a new memory limit.
It's true keeping it equal to work_mem by default makes this less of an
issue, but it's still another moving part the users will need to learn
how to use.

But if we do introduce a new limit, I very much think it should be a
plain limit, not a factor. That just makes it even more complicated, and
we don't have any such limit yet.
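To illustrate the two shapes being discussed, the proposals might look
roughly like this in postgresql.conf. Both GUC names below are made up
for the sake of the example; neither setting exists:

```
# Hypothetical sketch of the two proposals -- neither GUC exists.

# Absolute-size variant: the aggregate spills once it exceeds this limit.
# -1 means "same as work_mem"; values below work_mem would be clamped
# to work_mem, per Robert's suggestion.
hashagg_spill_limit = -1          # e.g. '256MB' to give hashagg more room

# Multiplier variant: a factor of work_mem, so the effective limit
# scales automatically when work_mem changes.
hashagg_spill_multiplier = 10.0   # spill only at 10 x work_mem
```

The trade-off is the one already stated above: the factor tracks
work_mem automatically, while the plain limit is one less derived
quantity for users to reason about.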

>
>> I think that's actually pretty appealing. Separating the memory we
>> plan to use from the memory we're willing to use before spilling seems
>> like a good idea in general, and I think we should probably also do it
>> in other places - like sorts.
>
>Indeed. And then perhaps we could eventually add some reporting /
>monitoring infrastructure for the cases where plan time and execution
>time memory estimate/usage widely differs.
>

I wouldn't mind something like that in general - not just for hashagg,
but for various other nodes.
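As a small example of the per-node reporting that already exists, a
disk-based hash aggregate shows its spill activity in EXPLAIN (ANALYZE).
The table name and all figures below are placeholders, and the exact
output format varies by version:

```
-- Illustrative only: with work_mem too small for the aggregation,
-- the HashAggregate node reports batches, memory, and disk usage.
SET work_mem = '1MB';
EXPLAIN (ANALYZE, COSTS OFF)
SELECT a, count(*) FROM large_table GROUP BY a;
--  HashAggregate (actual time=... rows=...)
--    Group Key: a
--    Batches: 33  Memory Usage: 1065kB  Disk Usage: 16408kB
--    ->  Seq Scan on large_table ...
```

What's missing, per the discussion above, is surfacing the gap between
the planner's memory estimate and what execution actually used.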

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
