Re: Large Scale Aggregation (HashAgg Enhancement)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Rod Taylor <pg(at)rbt(dot)ca>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Large Scale Aggregation (HashAgg Enhancement)
Date: 2006-01-16 19:43:09
Message-ID: 17440.1137440589@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> For HJ we write each outer tuple to its own file-per-batch in the order
> they arrive. Reading them back in preserves the original ordering. So
> yes, caution required, but I see no difficulty, just reworking the HJ
> code (nodeHashjoin and nodeHash). What else do you see?
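
[A toy sketch of the file-per-batch scheme the quote describes, with a fixed batch count. All names here are illustrative, not PostgreSQL's: outer tuples are appended to their batch's file in arrival order, so re-reading any one file yields its tuples in original order.]

```python
# Toy model (not PostgreSQL code) of hash-join batch spooling with a
# fixed number of batches: each tuple goes to one file, in arrival order.

NBATCH = 4

def batchno(key):
    """Stand-in for the hash partitioning function."""
    return key % NBATCH

files = {i: [] for i in range(NBATCH)}  # one in-memory "temp file" per batch

# Arrival stream of (key, sequence-number) pairs; seq records arrival order.
for seq, key in enumerate([7, 3, 6, 7, 3]):
    files[batchno(key)].append((key, seq))

# Keys 7 and 3 both map to batch 3; re-reading that file preserves
# the original relative ordering of their tuples.
print(files[3])  # [(7, 0), (3, 1), (7, 3), (3, 4)]
```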

With dynamic adjustment of the hash partitioning, some tuples will go
through multiple temp files before they ultimately get eaten, and
different tuples destined for the same aggregate may take different
paths through the temp files depending on when they arrive. It's not
immediately obvious that ordering is preserved when that happens.
I think it can be made to work but it may take different management of
the temp files than hashjoin uses. (Worst case, we could use just a
single temp file for all unprocessed tuples, but this would result in
extra I/O.)
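
[A toy sketch of the hazard described above, again with illustrative names only: when nbatch doubles mid-scan, a tuple spooled under the old partitioning gets re-routed when its batch is processed, and can land in its final file *after* a same-key tuple that arrived later but was written there directly.]

```python
# Toy model (not PostgreSQL code): doubling nbatch mid-scan can break
# arrival order for two tuples that share a key (same aggregate group).

def batchno(key, nbatch):
    """Stand-in for the hash partitioning function."""
    return key % nbatch

files = {i: [] for i in range(4)}  # one in-memory "temp file" per batch

nbatch = 2
files[batchno(3, nbatch)].append((3, "first"))   # key 3 -> batch 1

nbatch = 4                                       # dynamic split: 2 -> 4
files[batchno(3, nbatch)].append((3, "second"))  # key 3 -> batch 3

# Processing batch 1 re-routes its tuples under the new nbatch; the
# earlier tuple is appended to file 3 *after* the later one.
pending, files[1] = files[1], []
for key, tag in pending:
    files[batchno(key, nbatch)].append((key, tag))

print(files[3])  # [(3, 'second'), (3, 'first')] -- arrival order reversed
```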

regards, tom lane
