Re: Large Scale Aggregation (HashAgg Enhancement)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Rod Taylor <pg(at)rbt(dot)ca>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Large Scale Aggregation (HashAgg Enhancement)
Date: 2006-01-16 17:36:19
Message-ID: 9602.1137432979@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> On Mon, 2006-01-16 at 00:07 -0500, Rod Taylor wrote:
>> A couple of days ago I found myself wanting to aggregate 3 Billion
>> tuples down to 100 Million tuples based on an integer key with six
>> integer values -- six sum()'s.

> There is already hash table overflow (spill to disk) logic in HashJoins,
> so this should be possible by reusing that code for HashAggs. That's on
> my todo list, but I'd welcome any assistance.

Yeah, I proposed something similar a while back in conjunction with
fixing the spill logic for hash joins (which was always there, but was
not adaptive until recently). I got the join part done but got
distracted before fixing HashAgg :-(

In principle, you just reduce the range of currently-in-memory hash
codes whenever you run low on memory. The so-far-accumulated working
state for aggregates that are not in the range anymore goes into a temp
file, and subsequently any incoming tuples that hash outside the range
go into another temp file. After you've completed the scan, you
finalize and emit the aggregates that are still in memory, then you pick
up the first set of dropped aggregates, rescan the associated "TODO"
file of unprocessed tuples, lather rinse repeat till done.
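
To make that concrete, here is a toy C sketch of the above. This is
emphatically not PostgreSQL code: the names, the range-halving policy,
and the temp-file format are all invented for illustration, the only
aggregate is sum(), and MAX_GROUPS stands in for work_mem so that the
spill path is easy to trigger.

/*
 * Toy sketch of the scheme above -- invented for illustration, not
 * PostgreSQL code.  Keys are uint32, the only aggregate is sum(), and
 * MAX_GROUPS stands in for work_mem so the spill path triggers easily.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_GROUPS 4                    /* toy "work_mem": 4 group states */

typedef struct { uint32_t key; long val; } Tuple;

static uint32_t hashkey(uint32_t k) { return k * 2654435761u; }

/*
 * Aggregate tuples from 'input' (hash codes all in [lo, hi)).  Groups in
 * the shrinking range [lo, limit) stay in mem[]; the rest are deferred,
 * in arrival order, to a TODO file that a recursive pass picks up.
 */
static void
hashagg(FILE *input, uint64_t lo, uint64_t hi)
{
    Tuple    mem[MAX_GROUPS];           /* key plus running sum per group */
    Tuple    t;
    FILE    *todo = NULL;
    uint64_t limit = hi;                /* in-memory range is [lo, limit) */
    int      nmem = 0, i, j, k;

    rewind(input);
    while (fread(&t, sizeof t, 1, input) == 1)
    {
        uint64_t h = hashkey(t.key);

        if (h >= limit)                 /* outside the range: defer */
        {
            fwrite(&t, sizeof t, 1, todo);
            continue;
        }
        for (i = 0; i < nmem && mem[i].key != t.key; i++)
            ;                           /* linear scan: toy "hash table" */
        if (i < nmem)
        {
            mem[i].val += t.val;        /* transition function: sum() */
            continue;
        }
        /* New group: if the table is full, shrink the range to make room.
         * (Real code must also cope with more same-hash groups than fit
         * in memory, e.g. by falling back to sort-and-group.) */
        while (nmem == MAX_GROUPS)
        {
            limit = lo + (limit - lo) / 2;
            if (todo == NULL)
                todo = tmpfile();
            for (j = k = 0; j < nmem; j++)
            {
                if (hashkey(mem[j].key) >= limit)
                    /* Evict: sum()'s partial state can double as an input
                     * tuple, and it is written before any later tuples of
                     * its group, preserving per-group arrival order. */
                    fwrite(&mem[j], sizeof mem[j], 1, todo);
                else
                    mem[k++] = mem[j];
            }
            nmem = k;
        }
        if (h >= limit)                 /* the new tuple itself fell out */
        {
            fwrite(&t, sizeof t, 1, todo);
            continue;
        }
        mem[nmem].key = t.key;
        mem[nmem].val = t.val;
        nmem++;
    }

    for (i = 0; i < nmem; i++)          /* finalize what stayed in memory */
        printf("key %u  sum %ld\n", mem[i].key, mem[i].val);

    if (todo != NULL)                   /* lather, rinse, repeat */
    {
        hashagg(todo, limit, hi);
        fclose(todo);
    }
}

int
main(void)
{
    FILE *input = tmpfile();
    int   i;

    for (i = 0; i < 40; i++)            /* 40 tuples over 10 groups */
    {
        Tuple t = { i % 10, i };
        fwrite(&t, sizeof t, 1, input);
    }
    hashagg(input, 0, (uint64_t) UINT32_MAX + 1);
    fclose(input);
    return 0;
}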

The tricky part is to preserve the existing guarantee that tuples are
merged into their aggregate in arrival order. (This does not matter for
the standard aggregates but it definitely does for custom aggregates,
and there will be unhappy villagers appearing on our doorsteps if we
break it.) I think this can work correctly under the above sketch but
it needs to be verified. It might require different handling of the
TODO files than what hashjoin does.
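
For a concrete illustration of the hazard, here is a trivially
order-sensitive transition function (again just a made-up example, not
a real aggregate):

/* Hypothetical order-sensitive transition function: concatenation.
 * Unlike sum(), feeding the same tuples in a different order changes
 * the final result. */
#include <stdio.h>
#include <string.h>

static void transfn(char *state, const char *value) { strcat(state, value); }

int
main(void)
{
    char s1[16] = "", s2[16] = "";

    transfn(s1, "a"); transfn(s1, "b");  /* tuples arrive as a, b */
    transfn(s2, "b"); transfn(s2, "a");  /* tuples arrive as b, a */
    printf("%s vs %s\n", s1, s2);        /* prints "ab vs ba" */
    return 0;
}

Note that the sketch above cheats by writing a spilled group's partial
state back out as an ordinary input tuple, which happens to be safe for
sum() but not for an aggregate like this one.  Real code would
presumably have to reload spilled states as states, and rely on the TODO
file being written and replayed strictly sequentially so that each
group's tuples still reach its transition function in arrival order.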

regards, tom lane
