Re: WIP: bloom filter in Hash Joins with batches

From: Tomas Vondra
Subject: Re: WIP: bloom filter in Hash Joins with batches
Date:
Msg-id: 56810D8C.3020309@2ndquadrant.com
In reply to: Re: WIP: bloom filter in Hash Joins with batches  (David Rowley <david.rowley@2ndquadrant.com>)
Responses: Re: WIP: bloom filter in Hash Joins with batches  (David Rowley <david.rowley@2ndquadrant.com>)
List: pgsql-hackers
On 12/28/2015 03:15 AM, David Rowley wrote:
> On 18 December 2015 at 04:34, Tomas Vondra <tomas.vondra@2ndquadrant.com
> <mailto:tomas.vondra@2ndquadrant.com>> wrote:
>
>     I think ultimately we'll need to measure the false positive rate, so
>     that we can use it to dynamically disable the bloom filter if it
>     gets inefficient. Also maybe put some of that into EXPLAIN ANALYZE.
>
>
> I'm not so convinced that will be a good idea. What if the filter does
> not help much to start with, we disable it because of that, then we get
> some different probe values later in the scan which the bloom filter
> would have helped to eliminate earlier.

Yes, that's true. This might happen quite easily, for example when the 
outer table is sorted - in that case what we'll see are (possibly long) 
sequences of the same value, which might easily bias the decision.

I don't know how to fix this perfectly (if that's possible at all). But 
maybe we don't really need to. The goal is to improve overall 
performance, and this check (disabling the bloom filter) is meant to 
limit the loss when the filter returns "true" for most values (making 
it somewhat pointless).

The costs however are asymmetric - once we build the hash table (and 
the filter), the cost of dropping the bloom filter is constant, as we 
simply lose the time we spent building it. But the cost of continuing 
to use an inefficient filter is O(N), where N is the number of lookups 
(i.e. rows of the outer table). So it's probably better to drop the 
filter a bit too early and sacrifice the small amount of time we spent 
building it, so that we have a chance of not being much slower than the 
current implementation.
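
To illustrate, a minimal sketch of such an early-disable check might 
look like this (all names and thresholds here are hypothetical, not 
from the actual patch):

#include <stdbool.h>
#include <stdint.h>

#define BLOOM_CHECK_AFTER    1000   /* lookups before first evaluation */
#define BLOOM_MAX_PASS_RATE  0.90   /* disable if >90% of probes pass */

typedef struct BloomFilterState
{
    uint64_t    nlookups;   /* probes so far */
    uint64_t    npassed;    /* probes answered "maybe in the table" */
    bool        enabled;
} BloomFilterState;

/* called once per probe; returns whether the filter is still worth using */
static bool
bloom_still_useful(BloomFilterState *state)
{
    if (!state->enabled)
        return false;

    /* once we've seen enough probes, check the pass rate */
    if (state->nlookups >= BLOOM_CHECK_AFTER &&
        state->npassed > state->nlookups * BLOOM_MAX_PASS_RATE)
        state->enabled = false; /* dropping costs only the build time */

    return state->enabled;
}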

That is not to say we can't come up with better solutions - for example 
we can count (or estimate) the number of distinct hash values we've 
seen, and only make the decision once we've seen enough of them. We 
could use HyperLogLog for that, or simply count the number of times the 
value changed (hash value not equal to the previous one) - see the 
sketch below. That should be better than simply waiting for X lookups, 
but I'm sure it's still possible to come up with counter-examples.
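
A sketch of the value-change counting (again with made-up names, and 
assuming the includes from the previous sketch) - the decision is 
simply deferred until enough distinct-looking values have been seen, so 
a long sorted run cannot bias it:

#define BLOOM_MIN_CHANGES   100     /* wait for this many hash changes */

typedef struct BloomDecisionState
{
    uint32_t    prev_hash;  /* hash of the previous probe value */
    uint64_t    nchanges;   /* probes whose hash differed from the last */
    uint64_t    nlookups;
    uint64_t    npassed;
} BloomDecisionState;

static void
bloom_track_probe(BloomDecisionState *st, uint32_t hashvalue, bool passed)
{
    if (st->nlookups == 0 || hashvalue != st->prev_hash)
        st->nchanges++;
    st->prev_hash = hashvalue;
    st->nlookups++;
    if (passed)
        st->npassed++;
}

/* defer the enable/disable decision until we've seen enough changes */
static bool
bloom_ready_to_decide(const BloomDecisionState *st)
{
    return st->nchanges >= BLOOM_MIN_CHANGES;
}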

>
> Maybe it would be better to, once the filter is built, simply count the
> number of 1 bits and only use the filter if there's less than
> <threshold> 1 bits compared to the size of the filter in bits. There's
> functionality in bms_num_members() to do this, and there's
> also __builtin_popcount() in newer versions of GCC, which we could have
> some wrapper around, perhaps.

I don't really see how this is relevant to the previous point. The 
number of 1s in the bitmap tells you the false positive rate of the 
bloom filter, not what fraction of lookups will be answered with "true".

So while this needs to be watched, so that we stop using the bloom 
filter when it gets too full (e.g. due to an under-estimate), it's 
completely unrelated to the previous issue.
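
For completeness, a sketch of that fill-factor check - with k hash 
functions and a fraction f of the bits set, the expected false positive 
rate is roughly f^k, and the 1 bits can be counted with the 
__builtin_popcountll builtin (GCC/Clang); the function name and 
signature here are illustrative:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* estimate the false positive rate from the filter's fill factor */
static double
bloom_false_positive_rate(const uint64_t *words, size_t nwords, int nhashes)
{
    uint64_t    nset = 0;
    double      fill;

    for (size_t i = 0; i < nwords; i++)
        nset += __builtin_popcountll(words[i]);

    fill = (double) nset / (double) (nwords * 64);

    /* probability that all k probed bits happen to be set */
    return pow(fill, nhashes);
}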

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


