Re: Default setting for enable_hashagg_disk

From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, David Rowley <dgrowleyml(at)gmail(dot)com>, Jeff Davis <pgsql(at)j-davis(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Default setting for enable_hashagg_disk
Date: 2020-06-29 17:36:28
Message-ID: CAH2-Wz=YEMOeXdAPwZo7uriR5KPsf_RGuMHvk3HvLDVksdrwHg@mail.gmail.com
Lists: pgsql-docs pgsql-hackers

On Mon, Jun 29, 2020 at 8:29 AM Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> Is this something we want to codify for all node types,
> i.e., choose a non-spill node type if we need a lot more than work_mem,
> but then let work_mem be a soft limit if we do choose it, e.g., allow
> 50% over work_mem in the executor for misestimation before spill? My
> point is, do we want to use a lower work_mem for planning and a higher
> one in the executor before spilling?

Andres said something about doing that with hash aggregate, which I
can see an argument for, but I don't think it would make sense with
most other nodes. In particular, sorts still perform almost as well
with only a fraction of the "optimal" memory.

> My second thought is from an earlier report that spilling is very
> expensive, but smaller work_mem doesn't seem to hurt much.

It's not really about the spilling itself IMV. It's the inability to
do hash aggregation in a single pass.

You can think of hashing (say for hash join or hash aggregate) as a
strategy that consists of a logical division followed by a physical
combination. Sorting (or sort merge join, or group agg), in contrast,
consists of a physical division and logical combination. As a
consequence, it can be a huge win to do everything in memory in the
case of hash aggregate. Whereas sort-based aggregation can sometimes
be slightly faster with external sorts due to CPU caching effects, and
because an on-the-fly merge in tuplesort can output the first tuple
before the tuples are fully sorted.
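That division/combination contrast can be sketched in a few lines. The toy below is not PostgreSQL's implementation; `hash_aggregate`, `max_groups` (a stand-in for work_mem), and the fixed partition count are all invented for the illustration:

```python
from collections import defaultdict

def hash_aggregate(rows, max_groups, n_partitions=4):
    """Sum values per key with a bounded in-memory hash table.

    rows: iterable of (key, value) pairs.
    max_groups: cap on distinct keys held in memory (work_mem
    stand-in). Keys arriving after the table is full are spilled,
    partitioned by hash -- the "logical division".
    """
    table = {}
    spill = defaultdict(list)  # partition number -> deferred rows
    for key, val in rows:
        if key in table:
            table[key] += val          # in-memory: row touched once
        elif len(table) < max_groups:
            table[key] = val
        else:
            # logical division: route the row to a spill partition
            spill[hash(key) % n_partitions].append((key, val))
    # Second pass over each spill partition ("physical combination");
    # here we assume each partition now fits (real systems recurse).
    for part in spill.values():
        for key, val in part:
            table[key] = table.get(key, 0) + val
    return table
```

When `max_groups` covers every distinct key, each input row is touched exactly once, which is the big win the single-pass case buys. Once spilling starts, spilled rows must be written out and re-read, so the work roughly doubles for those rows regardless of how cheap the spill mechanism itself is.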

> Would we
> achieve better overall performance by giving a few nodes a lot of memory
> (and not spill those), and other nodes very little, rather than having
> them all be the same size, and all spill?

If the nodes that we give more memory to use it for a hash table, then yes.

--
Peter Geoghegan
