Discussion: Bloom Filter lookup for hash joins

Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
Hi All,

I have been researching bloom filters and discussed it on IRC with
RhodiumToad and David Fetter, and they pointed me to the various
places that could potentially have bloom filters, apart from the
places that already have them currently.

I have been reading the current implementation of hash joins, and in
ExecScanHashBucket, which I understand is the actual lookup function,
we could potentially look at a bloom filter per bucket. Instead of
actually looking up each hash value for the outer relation, we could
just check the corresponding bloom filter for that bucket, and if we
get a positive, then lookup the actual values i.e. continue with our
current behaviour (since we could be looking at a false positive).

This doesn't involve a lot of new logic. We need to add a bloom filter
in HashJoinState and set it when the hash table is being made. Also,
one thing we need to look at is the set of hash functions being used
for the bloom filters. This can be a point of further discussion.
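
To make that concrete, here is a rough standalone sketch of the kind
of per-bucket filter I have in mind (purely illustrative names and
sizes, not actual PostgreSQL code; the real thing would sit next to
the hash table and reuse the hash values the join has already
computed):

/* Toy per-bucket Bloom filter sketch -- illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BLOOM_NBITS 1024          /* bits per filter (assumed size) */
#define BLOOM_NHASH 2             /* probes per key (assumed) */

typedef struct BloomFilter
{
    uint8_t bits[BLOOM_NBITS / 8];
} BloomFilter;

/* Derive the i-th probe position from an already-computed hash value,
 * using the double-hashing trick h1 + i*h2. */
static uint32_t
bloom_probe(uint32_t hashvalue, int i)
{
    uint32_t h1 = hashvalue;
    uint32_t h2 = (hashvalue >> 16) | 1;    /* force odd */

    return (h1 + (uint32_t) i * h2) % BLOOM_NBITS;
}

/* Build phase: called once per inner tuple hashed into the bucket. */
static void
bloom_add(BloomFilter *bf, uint32_t hashvalue)
{
    for (int i = 0; i < BLOOM_NHASH; i++)
    {
        uint32_t bit = bloom_probe(hashvalue, i);

        bf->bits[bit / 8] |= (uint8_t) (1 << (bit % 8));
    }
}

/* Probe phase: false means "definitely not in this bucket" (no false
 * negatives), true means "maybe present, do the real bucket scan". */
static bool
bloom_maybe_present(const BloomFilter *bf, uint32_t hashvalue)
{
    for (int i = 0; i < BLOOM_NHASH; i++)
    {
        uint32_t bit = bloom_probe(hashvalue, i);

        if ((bf->bits[bit / 8] & (1 << (bit % 8))) == 0)
            return false;
    }
    return true;
}

int main(void)
{
    BloomFilter bf = {{0}};

    bloom_add(&bf, 0xdeadbeefU);
    printf("added key: maybe present = %d\n",
           bloom_maybe_present(&bf, 0xdeadbeefU));
    printf("other key: maybe present = %d\n",
           bloom_maybe_present(&bf, 0x12345678U));
    return 0;
}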

The major potential benefit we could gain is that bloom filters never
give false negatives. So, we could save a lot of lookups where the
corresponding bloom filter gives a negative.

This approach can also be particularly useful for hash anti joins,
where we look for negatives. Since bloom filters can easily be stored
and processed with straight bit operations, we could be looking at a
lot of savings here.

I am not sure if we already do something like this. Please direct me
to the relevant parts in the code if we already do.

Regards,

Atri

--
Regards,

Atri
l'apprenant



Re: Bloom Filter lookup for hash joins

From
Pavel Stehule
Date:
Hello

It looks interesting - please try to send some performance results.

Regards

Pavel

2013/6/26 Atri Sharma <atri.jiit@gmail.com>:
> Hi All,
>
> I have been researching bloom filters and discussed it on IRC with
> RhodiumToad and David Fetter, and they pointed me to the various
> places that could potentially have bloom filters, apart from the
> places that already have them currently.
>
> I have been reading the current implementation of hash joins, and in
> ExecScanHashBucket, which I understand is the actual lookup function,
> we could potentially look at a bloom filter per bucket. Instead of
> actually looking up each hash value for the outer relation, we could
> just check the corresponding bloom filter for that bucket, and if we
> get a positive, then lookup the actual values i.e. continue with our
> current behaviour (since we could be looking at a false positive).
>
> This doesn't involve a lot of new logic. We need to add a bloom filter
> in HashJoinState and set it when the hash table is being made. Also,
> one thing we need to look at is the set of hash functions being used
> for the bloom filters. This can be a point of further discussion.
>
> The major potential benefit we could gain is that bloom filters never
> give false negatives. So, we could save a lot of lookups where the
> corresponding bloom filter gives a negative.
>
> This approach can also be particularly useful for hash anti joins,
> where we look for negatives. Since bloom filters can easily be stored
> and processed with straight bit operations, we could be looking at a
> lot of savings here.
>
> I am not sure if we already do something like this. Please direct me
> to the relevant parts in the code if we already do.
>
> Regards,
>
> Atri
>
> --
> Regards,
>
> Atri
> l'apprenant
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers



Re: Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
On Wed, Jun 26, 2013 at 2:09 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:
> Hello
>
> it looks interesting - please, try to send some performance results,
>
> Regards
>
> Pavel

Hi,

Sure. I will think more about it and put up a design on the list soon.

My current aim would be to work on hash joins. If it works well for
us, we can look at hash anti joins as well.

Regards,

Atri


--
Regards,

Atri
l'apprenant



Re: Bloom Filter lookup for hash joins

From
Ants Aasma
Date:
On Wed, Jun 26, 2013 at 9:46 AM, Atri Sharma <atri.jiit@gmail.com> wrote:
> I have been researching bloom filters and discussed it on IRC with
> RhodiumToad and David Fetter, and they pointed me to the various
> places that could potentially have bloom filters, apart from the
> places that already have them currently.

One idea that I had was to use them to do CLOG lookups from smaller
data structures. You could store the list of aborted xids in bloom
filters. When an xid is not found in the filter, it is known to have
committed; if the filter returns true, you have to check the real
CLOG. This would achieve for example a 1:8 compression ratio at a 99%
commit rate and a 1:160'000 false positive rate, or a 1:16 compression
ratio at a 1:400 false positive rate. Nothing too exciting, so I didn't
really develop the idea any further.
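
(As a back-of-the-envelope check of those figures, assuming roughly 1%
of xids are aborted and an optimally tuned filter, whose false positive
rate is about $p \approx 0.6185^{m/n}$, and given CLOG's two status
bits per xid:

    $\frac{m}{n} = \frac{2}{8 \times 0.01} = 25
        \;\Rightarrow\; p \approx 0.6185^{25} \approx 1/160\,000,
     \qquad
     \frac{m}{n} = \frac{2}{16 \times 0.01} = 12.5
        \;\Rightarrow\; p \approx 0.6185^{12.5} \approx 1/400.$)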

> I have been reading the current implementation of hash joins, and in
> ExecScanHashBucket, which I understand is the actual lookup function,
> we could potentially look at a bloom filter per bucket. Instead of
> actually looking up each hash value for the outer relation, we could
> just check the corresponding bloom filter for that bucket, and if we
> get a positive, then lookup the actual values i.e. continue with our
> current behaviour (since we could be looking at a false positive).

The problem here is that if the hash table is in memory, doing a hash
table lookup directly is likely to be cheaper than a bloom filter
lookup, even if the bloom filter fits into the processor cache and the
hash table doesn't (10 last-level cache hits are slower than one cache
miss). A Bloom filter will help when it's not feasible to use an actual
hash table (lots of large keys), the real lookup is slow (e.g. an
index lookup), you are doing a lot of lookups to amortize the
construction cost and the condition is expected to be selective (most
lookups give a negative). There might be some data warehouse types of
queries where it would help, but it seems like an awfully narrow use
case with a potential for performance regressions when the planner has
a bad estimate.

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de



Re: Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
> One idea that I had was to use them to do CLOG lookups from smaller
> datastructures. You could store the list of aborted xids in bloom
> filters. When a xid is not found in the filter, then it is known to
> have committed, if the filter returns true, then you have to check the
> real CLOG. This would achieve for example a 1:8 compression ratio at
> 99% commit rate and 1:160'000 false positive rate, or 1:16 compression
> ratio at 1:400 false positive rate. Nothing too exciting, so I didn't
> really develop the idea any further.
>
Right.


>
> The problem here is that if the hash table is in memory, doing a hash
> table lookup directly is likely to be cheaper than a bloom filter
> lookup, even if the bloom filter fits into the processor cache and the
> hash table doesn't (10 last-level cache hits are slower than one cache
> miss). A Bloom filter will help when it's not feasible to use an actual
> hash table (lots of large keys), the real lookup is slow (e.g. an
> index lookup), you are doing a lot of lookups to amortize the
> construction cost and the condition is expected to be selective (most
> lookups give a negative). There might be some data warehouse types of
> queries where it would help, but it seems like an awfully narrow use
> case with a potential for performance regressions when the planner has
> a bad estimate.

OK, sounds good. Can't we use bloom filters for the case where the hash
table doesn't fit in memory? Specifically, when reading from disk is
inevitable for accessing the hash table, we can use bloom filters for
deciding which keys to actually read from disk. Thus, we save the I/O
costs we would have incurred reading keys that are not the ones we
want, for which the bloom filter gives us a negative.

Regards,

Atri


--
Regards,

Atri
l'apprenant



Re: Bloom Filter lookup for hash joins

From
Simon Riggs
Date:
On 26 June 2013 07:46, Atri Sharma <atri.jiit@gmail.com> wrote:
 
> I have been researching bloom filters and discussed it on IRC with
> RhodiumToad and David Fetter, and they pointed me to the various
> places that could potentially have bloom filters, apart from the
> places that already have them currently.
>
> I have been reading the current implementation of hash joins, and in
> ExecScanHashBucket, which I understand is the actual lookup function,
> we could potentially look at a bloom filter per bucket. Instead of
> actually looking up each hash value for the outer relation, we could
> just check the corresponding bloom filter for that bucket, and if we
> get a positive, then lookup the actual values i.e. continue with our
> current behaviour (since we could be looking at a false positive).

Exactly this was suggested by me on the NTUP_PER_BUCKET thread last week.

Probably a good idea to join in there also.
 
--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Re: Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
On Wed, Jun 26, 2013 at 5:47 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On 26 June 2013 07:46, Atri Sharma <atri.jiit@gmail.com> wrote:
>
>>
>> I have been researching bloom filters and discussed it on IRC with
>> RhodiumToad and David Fetter, and they pointed me to the various
>> places that could potentially have bloom filters, apart from the
>> places that already have them currently.
>>
>> I have been reading the current implementation of hash joins, and in
>> ExecScanHashBucket, which I understand is the actual lookup function,
>> we could potentially look at a bloom filter per bucket. Instead of
>> actually looking up each hash value for the outer relation, we could
>> just check the corresponding bloom filter for that bucket, and if we
>> get a positive, then lookup the actual values i.e. continue with our
>> current behaviour (since we could be looking at a false positive).
>
>
> Exactly this was suggested by me on the NTUP_PER_BUCKET thread last week.
>
> Probably good idea to join in there also.
>

Great. I would love to assist you in implementing this, if possible.

Regards,

Atri


--
Regards,

Atri
l'apprenant



Re: Bloom Filter lookup for hash joins

From
Tom Lane
Date:
Ants Aasma <ants@cybertec.at> writes:
> On Wed, Jun 26, 2013 at 9:46 AM, Atri Sharma <atri.jiit@gmail.com> wrote:
>> I have been reading the current implementation of hash joins, and in
>> ExecScanHashBucket, which I understand is the actual lookup function,
>> we could potentially look at a bloom filter per bucket. Instead of
>> actually looking up each hash value for the outer relation, we could
>> just check the corresponding bloom filter for that bucket, and if we
>> get a positive, then lookup the actual values i.e. continue with our
>> current behaviour (since we could be looking at a false positive).

> The problem here is that if the hash table is in memory, doing a hash
> table lookup directly is likely to be cheaper than a bloom filter
> lookup,

Yeah.  Given the plan to reduce NTUP_PER_BUCKET to 1, it's hard to see
how adding a Bloom filter phase could be anything except overhead.  Even
with the current average bucket length, it doesn't sound very promising.
        regards, tom lane



Re: Bloom Filter lookup for hash joins

From
Jeff Janes
Date:
On Wed, Jun 26, 2013 at 5:01 AM, Atri Sharma <atri.jiit@gmail.com> wrote:

>> The problem here is that if the hash table is in memory, doing a hash
>> table lookup directly is likely to be cheaper than a bloom filter
>> lookup, even if the bloom filter fits into the processor cache and the
>> hash table doesn't (10 last-level cache hits are slower than one cache
>> miss). A Bloom filter will help when it's not feasible to use an actual
>> hash table (lots of large keys), the real lookup is slow (e.g. an
>> index lookup), you are doing a lot of lookups to amortize the
>> construction cost and the condition is expected to be selective (most
>> lookups give a negative). There might be some data warehouse types of
>> queries where it would help, but it seems like an awfully narrow use
>> case with a potential for performance regressions when the planner has
>> a bad estimate.
>
> OK, sounds good. Can't we use bloom filters for the case where the hash
> table doesn't fit in memory? Specifically, when reading from disk is
> inevitable for accessing the hash table, we can use bloom filters for
> deciding which keys to actually read from disk.

I don't think that sounds all that promising.  When the hash table does not fit in memory, it is either partitioned into multiple passes, each of which does fit in memory, or it chooses a different plan altogether.  Do we know under what conditions a Bloom filter would be superior to those options, and could we reliably detect those conditions?

Cheers,

Jeff

Re: Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
On Thu, Jun 27, 2013 at 12:01 AM, Jeff Janes <jeff.janes@gmail.com> wrote:

> I don't think that sounds all that promising.  When the hash table does not
> fit in memory, it is either partitioned into multiple passes, each of which
> does fit in memory, or it chooses a different plan altogether.

Yeah, my point is that we could potentially avoid bringing in all of
the memory blocks for the hash tables. Even if they are processed in
parts, we still end up looking at all of them.

Why not do a local bloom filter lookup, and never bring in the tuples
for which the bloom filter gives a negative? This could save us some
reads anyhow.

> Do we know
> under what conditions a Bloom filter would be superior to those options, and
> could we reliably detect those conditions?

Yes, this needs to be researched.

Regards,

Atri




--
Regards,

Atri
l'apprenant



Re: Bloom Filter lookup for hash joins

From
Greg Stark
Date:

This exact idea was discussed a while back. I think it was even implemented.

The problem Tom raised at the time is that the memory usage of the bloom filter implies a smaller or less efficient hash table. It's difficult to determine whether you're coming out ahead or behind.

I think it should be possible to figure this out though. Bloom filters have well-understood math for the error rate given the size and number of hash functions (and please read up on it and implement the optimal combination for the target error rate, not just a wild guess) and it should be possible to determine the resulting advantage.
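
For reference, the textbook sizing for a target error rate p over n keys is m = -n ln p / (ln 2)^2 bits with k = (m/n) ln 2 hash functions; a toy helper like this (illustrative only, not tied to any existing code) shows the shape of the calculation:

/* Toy Bloom filter sizing helper -- illustrative only. */
#include <math.h>
#include <stdio.h>

/* Given an expected key count n and a target false positive rate p,
 * return the textbook-optimal bit count m and hash count k. */
static void
bloom_size_for(double n, double p, double *m_bits, int *k_hashes)
{
    double ln2 = log(2.0);

    *m_bits = ceil(-n * log(p) / (ln2 * ln2));   /* m = -n ln p / (ln 2)^2 */
    *k_hashes = (int) (*m_bits / n * ln2 + 0.5); /* k = (m/n) ln 2 */
}

int main(void)
{
    double m;
    int    k;

    /* e.g. one million inner keys, 1% target false positive rate */
    bloom_size_for(1e6, 0.01, &m, &k);
    printf("m = %.0f bits (%.2f bits/key), k = %d hash functions\n",
           m, m / 1e6, k);
    return 0;
}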

Measuring the cost of the memory usage is harder but some empirical results should give some idea.  I expect the result to be wildly uneven one way or the other, so hopefully it doesn't matter if it's not perfect. If it's close then it's probably not worth doing anyway.

I would suggest looking up the archives of the previous discussion. You might find the patch still usable. IIRC there have been no major changes to the hash join code.

--
greg

On Jun 26, 2013 7:31 PM, "Jeff Janes" <jeff.janes@gmail.com> wrote:
> On Wed, Jun 26, 2013 at 5:01 AM, Atri Sharma <atri.jiit@gmail.com> wrote:
>
>>> The problem here is that if the hash table is in memory, doing a hash
>>> table lookup directly is likely to be cheaper than a bloom filter
>>> lookup, even if the bloom filter fits into the processor cache and the
>>> hash table doesn't (10 last-level cache hits are slower than one cache
>>> miss). A Bloom filter will help when it's not feasible to use an actual
>>> hash table (lots of large keys), the real lookup is slow (e.g. an
>>> index lookup), you are doing a lot of lookups to amortize the
>>> construction cost and the condition is expected to be selective (most
>>> lookups give a negative). There might be some data warehouse types of
>>> queries where it would help, but it seems like an awfully narrow use
>>> case with a potential for performance regressions when the planner has
>>> a bad estimate.
>
>> OK, sounds good. Can't we use bloom filters for the case where the hash
>> table doesn't fit in memory? Specifically, when reading from disk is
>> inevitable for accessing the hash table, we can use bloom filters for
>> deciding which keys to actually read from disk.
>
> I don't think that sounds all that promising.  When the hash table does not fit in memory, it is either partitioned into multiple passes, each of which does fit in memory, or it chooses a different plan altogether.  Do we know under what conditions a Bloom filter would be superior to those options, and could we reliably detect those conditions?
>
> Cheers,
>
> Jeff

Re: Bloom Filter lookup for hash joins

From
Atri Sharma
Date:
On Tue, Jul 23, 2013 at 7:32 PM, Greg Stark <stark@mit.edu> wrote:
> This exact idea was discussed a while back. I think it was even implemented.
>
> The problem Tom raised at the time is that the memory usage of the bloom
> filter implies a smaller or less efficient hash table. It's difficult to
> determine whether you're coming out ahead or behind.
>
> I think it should be possible to figure this out though. Bloom filters have
> well-understood math for the error rate given the size and number of hash
> functions (and please read up on it and implement the optimal combination
> for the target error rate, not just a wild guess) and it should be possible
> to determine the resulting advantage.
>
> Measuring the cost of the memory usage is harder but some empirical results
> should give some idea.  I expect the result to be wildly uneven one way or
> the other, so hopefully it doesn't matter if it's not perfect. If it's close
> then it's probably not worth doing anyway.
>
> I would suggest looking up the archives of the previous discussion. You
> might find the patch still usable. IIRC there have been no major changes to
> the hash join code.
>
Right, I will definitely have a look at that thread. Thanks for the info!

Regards,

Atri


-- 
Regards,

Atri
l'apprenant