Re: A better way than tweaking NTUP_PER_BUCKET

From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date: 2013-06-22 21:08:46
Message-ID: 51C6125E.3090806@vmware.com
Lists: pgsql-hackers

On 22.06.2013 19:19, Simon Riggs wrote:
> So I think that (2) is the best route: given that we know the number
> of rows in the scanned relation with much better certainty, we should
> be able to examine our hash table after it has been built and decide
> whether it would be cheaper to rebuild it with the right number of
> buckets, or to continue processing with what we have now. This is
> roughly what Heikki already proposed in January.
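
A hypothetical back-of-envelope version of that costing decision (the
function, names, and numbers below are illustrative, not from any posted
patch): rebuilding costs roughly one pass over the inner tuples, while
keeping an overloaded table costs extra key comparisons on every
outer-side probe.

#include <stdbool.h>
#include <stdio.h>

static bool
worth_rebuilding(double inner_tuples, double outer_tuples,
                 double nbuckets, double target_load)
{
    double current_load = inner_tuples / nbuckets;  /* avg chain length */
    double rebuild_cost = inner_tuples;   /* relink every inner tuple once */
    double extra_probes = outer_tuples * (current_load - target_load);

    return extra_probes > rebuild_cost;
}

int
main(void)
{
    /* 1M inner tuples in 10k buckets (chains of ~100) vs. a target load
     * of 10: rebuilding pays off even for a modest outer side. */
    printf("rebuild: %s\n",
           worth_rebuilding(1e6, 1e5, 1e4, 10.0) ? "yes" : "no");
    return 0;
}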

Back in January, I wrote a quick patch to experiment with rehashing when
the hash table becomes too full. It was too late to make it into 9.3, so
I didn't pursue it further at the time, but IIRC it worked. If we have
the capability to rehash, the accuracy of the initial guess becomes much
less important.
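
For anyone who doesn't want to read the attachment, a minimal,
self-contained sketch of the idea (simplified C; the real patch has to
deal with memory contexts, batching, and executor state, none of which
appear here): double the bucket array and relink every tuple once the
average chain length exceeds NTUP_PER_BUCKET.

#include <stdio.h>
#include <stdlib.h>

#define NTUP_PER_BUCKET 10      /* load threshold before rehashing */

typedef struct Tuple
{
    unsigned int hashvalue;     /* hash of the join key, computed once */
    struct Tuple *next;         /* chain link within a bucket */
} Tuple;

typedef struct HashTable
{
    Tuple      **buckets;
    unsigned int nbuckets;      /* always a power of two */
    unsigned int ntuples;
} HashTable;

static void
rehash(HashTable *ht)
{
    unsigned int newnbuckets = ht->nbuckets * 2;
    Tuple      **newbuckets = calloc(newnbuckets, sizeof(Tuple *));
    unsigned int i;

    /* Walk every chain and relink each tuple into its new bucket. */
    for (i = 0; i < ht->nbuckets; i++)
    {
        Tuple      *tup = ht->buckets[i];

        while (tup != NULL)
        {
            Tuple       *next = tup->next;
            unsigned int b = tup->hashvalue & (newnbuckets - 1);

            tup->next = newbuckets[b];
            newbuckets[b] = tup;
            tup = next;
        }
    }
    free(ht->buckets);
    ht->buckets = newbuckets;
    ht->nbuckets = newnbuckets;
}

static void
insert_tuple(HashTable *ht, unsigned int hashvalue)
{
    Tuple       *tup = malloc(sizeof(Tuple));
    unsigned int b;

    /* Rehash first if the table has become too full. */
    if (ht->ntuples >= ht->nbuckets * NTUP_PER_BUCKET)
        rehash(ht);

    b = hashvalue & (ht->nbuckets - 1);
    tup->hashvalue = hashvalue;
    tup->next = ht->buckets[b];
    ht->buckets[b] = tup;
    ht->ntuples++;
}

int
main(void)
{
    HashTable   ht = {calloc(4, sizeof(Tuple *)), 4, 0};
    unsigned int i;

    /* Insert far more tuples than the initial 4 buckets were sized for;
     * the table rehashes itself as it fills up. */
    for (i = 0; i < 1000; i++)
        insert_tuple(&ht, i * 2654435761u);     /* multiplicative hash */

    printf("%u tuples in %u buckets\n", ht.ntuples, ht.nbuckets);
    return 0;
}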

- Heikki

Attachment: rehash-hashjoin-1.patch (text/x-diff, 5.8 KB)
