Re: tweaking NTUP_PER_BUCKET

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: tweaking NTUP_PER_BUCKET
Date: 2014-07-19 18:24:00
Message-ID: 521.1405794240@sss.pgh.pa.us
Lists: pgsql-hackers

Tomas Vondra <tv(at)fuzzy(dot)cz> writes:
> I've reviewed the two test cases mentioned here, and sadly there's
> nothing that can be 'fixed' by this patch. The problem here lies in the
> planning stage, which decides to hash the large table - we can't fix
> that in the executor.

We've heard a couple of reports before of the planner deciding to hash a
larger table rather than a smaller one. The only reason I can think of
for that is if the smaller table has many more duplicates, so that the
planner thinks the executor might end up traversing long hash chains.
The planner's estimates could easily be off in this area, of course.
estimate_hash_bucketsize() is the likely culprit if it's wrong.
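
For instance, one way to see what the planner believes about duplicates on
the join key (the statistics estimate_hash_bucketsize() works from) is to
look at pg_stats for the column involved; the table and column names below
are placeholders, not taken from the test cases under discussion:

    -- n_distinct and the most-common-value frequencies are the inputs
    -- estimate_hash_bucketsize() uses when guessing at chain lengths.
    SELECT n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'small_table'   -- placeholder table name
       AND attname   = 'join_key';     -- placeholder column name

Roughly speaking, if n_distinct is badly underestimated there, the planner
will expect long chains on that side and may prefer to hash the other,
larger relation instead.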

Which test case are you seeing this in, exactly?

regards, tom lane
