Re: WIP: bloom filter in Hash Joins with batches

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: "Shulgin, Oleksandr" <oleksandr(dot)shulgin(at)zalando(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP: bloom filter in Hash Joins with batches
Date: 2015-12-28 10:23:08
Message-ID: 56810D8C.3020309@2ndquadrant.com
Lists: pgsql-hackers

On 12/28/2015 03:15 AM, David Rowley wrote:
> On 18 December 2015 at 04:34, Tomas Vondra
> <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
>
> I think ultimately we'll need to measure the false positive rate, so
> that we can use it to dynamically disable the bloom filter if it
> gets inefficient. Also maybe put some of that into EXPLAIN ANALYZE.
>
>
> I'm not so convinced that will be a good idea. What if the filter does
> not help much to start with, we disable it because of that, then we get
> some different probe values later in the scan which the bloom filter
> would have helped to eliminate earlier.

Yes, that's true. This might happen quite easily, for example when the
outer table is sorted - in that case we'll see (possibly long) sequences
of the same value, which might easily bias the decision.

I don't know how to fix this perfectly (if that's possible at all). But
maybe we don't really need to. The goal is to improve overall
performance, and this check (disabling the bloom filter) is meant to
limit the loss when the filter returns "true" for most values (making it
somewhat pointless).

The costs however are asymmetric - once we build the hash table (and the
filter), the cost of dropping the bloom filter is constant, as we simply
lose the time we spent building it. But the cost of continuing to use an
inefficient filter is O(N), where N is the number of lookups (i.e. rows
of the outer table). So it's probably better to drop the filter a bit too
early and sacrifice the small amount of time we spent building it, so
that we have a chance of not being much slower than the current
implementation.
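
Just to illustrate what I mean, a rough sketch - the struct, function
and threshold names are made up, nothing from the actual patch:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative only - names and thresholds are made up, not from the patch. */
#define BLOOM_MIN_LOOKUPS       1000    /* don't decide before this many probes */
#define BLOOM_MIN_REJECT_FRAC   0.25    /* require at least 25% of probes rejected */

typedef struct BloomFilterState
{
    uint64_t    nlookups;   /* probes against the filter so far */
    uint64_t    nrejected;  /* probes answered "false" (row skipped) */
    bool        enabled;    /* are we still using the filter? */
} BloomFilterState;

static void
bloom_maybe_disable(BloomFilterState *bf)
{
    if (!bf->enabled || bf->nlookups < BLOOM_MIN_LOOKUPS)
        return;

    /*
     * If the filter rejects too small a fraction of the probes, it's not
     * paying for itself - drop it and accept the (constant) build cost.
     */
    if ((double) bf->nrejected / bf->nlookups < BLOOM_MIN_REJECT_FRAC)
        bf->enabled = false;
}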

That is not to say we can't come up with better solutions - for example
we can count (or estimate) the number of distinct hash values we've
seen, and only make the decision once we've seen enough of them. We
could use HyperLogLog for that, or simply count the number of times the
value changed (hash value not equal to the previous one). That should be
better than simply waiting for X lookups, but I'm sure it's still
possible to come up with counter-examples.
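
The "count the changes" variant could be as simple as something like
this (again just an illustration, not code from the patch):

#include <stdint.h>
#include <stdbool.h>

/*
 * Illustrative only: approximate "enough distinct values seen" by counting
 * how often the hash changes between consecutive probes, and postpone the
 * disable decision until that counter is large enough.
 */
typedef struct BloomProbeStats
{
    uint32_t    prev_hash;  /* hash value of the previous probe */
    bool        has_prev;   /* seen at least one probe? */
    uint64_t    nchanges;   /* how many times the hash value changed */
} BloomProbeStats;

static void
bloom_note_probe(BloomProbeStats *stats, uint32_t hashvalue)
{
    if (stats->has_prev && hashvalue != stats->prev_hash)
        stats->nchanges++;

    stats->prev_hash = hashvalue;
    stats->has_prev = true;
}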

>
> Maybe it would be better to, once the filter is built, simply count the
> number of 1 bits and only use the filter if there's less than
> <threshold> 1 bits compared to the size of the filter in bits. There's
> functionality in bms_num_members() to do this, and there's
> also __builtin_popcount() in newer versions of GCC, which we could have
> some wrapper around, perhaps.

I don't really see how this is relevant to the previous point. The
number of 1s in the bitmap can tell you the false positive rate of the
bloom filter, not what fraction of lookups will be answered with "true".

So while this needs to be watched - so that we stop using the bloom
filter when it gets too full (e.g. due to an under-estimate) - it's
completely unrelated to the previous issue.
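
As an aside, that estimate is cheap to compute - with a fraction f of
bits set and k hash functions, the expected false positive rate is
roughly f^k, so a popcount over the bitmap is all we need. A rough
sketch (not from the patch), using the GCC builtin David mentioned:

#include <stddef.h>
#include <stdint.h>
#include <math.h>

/*
 * Illustrative only: estimate the false positive rate from the fill factor.
 * With a fraction f of bits set and k hash functions, the expected false
 * positive rate is roughly pow(f, k).
 */
static double
bloom_false_positive_rate(const uint64_t *bitmap, size_t nwords, int nhashes)
{
    uint64_t    nset = 0;
    size_t      i;

    for (i = 0; i < nwords; i++)
        nset += __builtin_popcountll(bitmap[i]);    /* GCC/clang builtin */

    return pow((double) nset / (double) (nwords * 64), nhashes);
}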

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
