Re: Default setting for enable_hashagg_disk

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Jeff Davis <pgsql(at)j-davis(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Bruce Momjian <bruce(at)momjian(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Default setting for enable_hashagg_disk
Date: 2020-07-10 00:08:42
Message-ID: 20200710000842.GF12375@tamriel.snowman.net
Lists: pgsql-docs pgsql-hackers

Greetings,

* Peter Geoghegan (pg(at)bowt(dot)ie) wrote:
> On Thu, Jul 9, 2020 at 3:58 PM Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> > > That's not the only justification. The other justification is that
> > > it's generally reasonable to prefer giving hash aggregate more memory.
> >
> > Sure, and it's generally reasonable to prefer giving Sorts more memory
> > too... as long as you've got it available.
>
> The value of doing a hash aggregate all in memory is generally far
> greater than the value of doing a sort all in memory. They're just
> very different situations, owing to the fundamental laws-of-physics
> principles that apply in each case. Even to the extent that sometimes
> an external sort can actually be slightly *faster* (it's not really
> supposed to be, but it is). Try it out yourself.

I didn't, and don't, think it particularly relevant to the discussion,
but if you don't like the comparison to a Sort then we could compare it
to a HashJoin instead. The point is that, yes, if you are willing to
give more memory to a given operation, it will go faster, and the way
users tell us that they'd like a query to use more memory is already
well defined and well understood: it's work_mem. We do not do them a
service by ignoring that.
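
For example (illustrative only; the table and values here are made up),
the existing, documented knob already lets a user hand a memory-hungry
query more room, scoped to just their own session:

    -- raise the per-node memory budget for this session only
    SET work_mem = '256MB';

    -- run the query that needs the larger hash table or sort space
    EXPLAIN ANALYZE
    SELECT customer_id, count(*)
    FROM orders
    GROUP BY customer_id;

    -- and put it back afterwards
    RESET work_mem;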

> > > This would even be true in a world where all grouping estimates were
> > > somehow magically accurate. These two justifications coincide in a way
> > > that may seem a bit too convenient to truly be an accident of history.
> > > And if they do: I agree. It's no accident.
> >
> > I disagree that the lack of HashAgg's ability to spill to disk was
> > because, for this one particular node, we should always just give it
> > however much memory it needs, regardless of whether it's anywhere near
> > how much we thought it'd need or not.
>
> This is a straw man.

It's really not: the system has been quite intentionally designed, and
documented, to work within the constraints given to it (even if those
constraints aren't very well defined, this is still the case), and this
particular node didn't respect them. That isn't a feature.

> It's possible to give hash agg an amount of memory that exceeds
> work_mem, but is less than infinity. That's more or less what I
> propose to enable by inventing a new hash_mem GUC, in fact.

We already have a GUC that we've documented and explained to users as
being there specifically to control this exact thing, and that's
work_mem. How would we document this? "work_mem is used to control the
amount of memory a given node can consider using... oh, except for this
one particular kind of node called a HashAgg; for that you have to use
this other variable; no, there's no other node-specific tunable like
that, and no, you can't control how much memory is used for a given
HashAgg or for a given node." Sure, I'm going over the top here to make
my point, but I don't think I'm far from the mark on how this would
look.
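
Concretely, the split being proposed would look something like this to
a user (hash_mem is only the proposed name and does not exist in any
released version; the values are purely illustrative):

    -- today: one knob that every Sort, HashJoin and HashAgg node consults
    SET work_mem = '64MB';

    -- under the proposal: a second knob that only HashAgg would consult
    -- (hypothetical parameter; value illustrative)
    SET hash_mem = '256MB';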

> There is also the separate "escape hatch" idea that David Rowley
> proposed, that I consider to be a plausible way of resolving the
> problem. That wouldn't "always give it [hash agg] however much memory
> it asks for", either. It would only do that when the setting indicated
> that hash agg should be given however much memory it asks for.

Where's the setting for HashJoin or for Sort, to do the same thing?
Would we consider it sensible to set everything to "use as much memory
as you want"? I disagree with this notion that HashAgg is so very
special that it must have an independent set of tunables like this.

> > I disagree that we were reasonably happy with this bug or that it
> > somehow makes sense to retain it.
>
> While we're far from resolving this open item, I think that you'll
> find that most people agree that it's reasonable to think of hash agg
> as special -- at least in some contexts. The central disagreement
> seems to be on the question of how to maintain some kind of continuity
> with the old behavior, how ambitious our approach should be in
> Postgres 13, etc.

The old behavior was buggy, and we are providing quite enough continuity
through the fact that we've got major versions which will be maintained
for the next five years, and which folks can keep running as they test
out newer versions. Inventing hacks to preserve bug-compatibility across
major versions is not a good direction to go in.

> > > Yes, we were concerned about the risk of OOM for many years, but it
> > > was considered a risk worth taking. We knew what the trade-off was. We
> > > never quite admitted it, but what does it matter?
> >
> > This is not some well-designed feature of HashAgg that had a lot of
> > thought put into it, whereby the community agreed that we should just
> > let it be and hope no one noticed or got bit by it. I certainly have
> > managed to kill servers by a HashAgg gone bad and I seriously doubt
> > I'm alone in that.
>
> I was talking about the evolutionary pressures that led to this
> curious state of affairs, where hashagg's overuse of memory was often
> actually quite desirable. I understand that it also sometimes causes
> OOMs, and that OOMs are bad. Both beliefs are compatible, just as a
> design that takes both into account is possible.

I don't agree that the evolution of the system led us to have a HashAgg
node that overused memory; it certainly wasn't intentional, since it
only happened when, based on the information we had at plan time, we
thought it wouldn't. That situation has certainly led a lot of folks to
increase work_mem to get the HashAgg plan they wanted, and most of
them, most likely, won't actually end up being hit by this change at
all.
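
That pattern is easy to reproduce (an illustrative example; the table
here is made up): with a low work_mem the planner expects the hash
table not to fit and falls back to a sort-based plan, and raising
work_mem is how people have always nudged it back to a HashAggregate:

    SET work_mem = '4MB';
    EXPLAIN SELECT account_id, sum(amount)
    FROM ledger GROUP BY account_id;
    -- typically: GroupAggregate over an external Sort

    SET work_mem = '512MB';
    EXPLAIN SELECT account_id, sum(amount)
    FROM ledger GROUP BY account_id;
    -- typically: HashAggregate, now expected to fit in memory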

> If it isn't practical to do that in Postgres 13, then an escape hatch
> is highly desirable, if not essential.

We have a parameter which already drives this and which users are
welcome to (and quite often do) tune. I disagree that anything further
is either essential or particularly desirable.

I'm really rather astounded at the direction this has been going in.

Thanks,

Stephen
