Discussion: Hash-based MCV matching for large IN-lists


Hash-based MCV matching for large IN-lists

From
Ilia Evdokimov
Date:

Hi hackers,

When estimating selectivity for ScalarArrayOpExpr (IN, ANY, ALL) and MCV statistics are available for the column, the planner currently matches IN-list elements against the MCV array using a nested loop. For large IN-lists and large MCV arrays this results in O(N*M) behavior, which can become unnecessarily expensive during planning.

Thanks to David for pointing out this case [0].

This patch introduces a hash-based matching path, analogous to what is already done for MCV matching in join selectivity estimation (commit 057012b). Instead of linearly scanning the MCV array for each IN-list element, we build a hash table and probe it to identify matches.

The hash table is built over the MCV values, not over the IN-list. The IN-list may contain NULLs, non-Const expressions, and duplicate values, whereas the MCV list is guaranteed to contain distinct, non-NULL values and represents the statistically meaningful domain we are matching against. Hashing the MCVs therefore avoids duplicate work and directly supports selectivity estimation.

For each IN-list element, if a matching MCV is found, we add the corresponding MCV frequency to the selectivity estimate. If no match is found, the remaining selectivity is estimated in the same way as the existing non-MCV path (similar to var_eq_const when the constant is not present in the MCV list).

The hash-based path is enabled only when both a sufficiently large IN-list and an MCV list are present, and suitable hash functions exist for the equality operator. The threshold is currently the same as the one used for join MCV hashing, since the underlying algorithmic tradeoffs are similar.

Example:

CREATE TABLE t (x int);
INSERT INTO t SELECT x % 10000 FROM generate_series(1, 3000000) x;
ALTER TABLE t ALTER COLUMN x SET STATISTICS 10000;
ANALYZE t;

Before patch:
EXPLAIN (SUMMARY) SELECT * FROM t WHERE x IN (1,2,...,2000);
Seq Scan on t  (cost=5.00..58280.00 rows=600000 width=4)
   Filter: (x = ANY ('{1,2,...,2000}'::integer[]))
 Planning Time: 57.137 ms
(3 rows)

After patch:
EXPLAIN (SUMMARY) SELECT * FROM t WHERE x IN (1,2,...,2000);
Seq Scan on t  (cost=5.00..58280.00 rows=600000 width=4)
   Filter: (x = ANY ('{1,2,...,2000}'::integer[]))
 Planning Time: 0.558 ms
(3 rows)

Comments, suggestions, and alternative approaches are welcome!

[0]: https://www.postgresql.org/message-id/b6316b99-565b-4c89-aa08-6aea51f54526%40gmail.com

-- 
Best regards,
Ilia Evdokimov,
Tantor Labs LLC,
https://tantorlabs.com/

Attachments

Re: Hash-based MCV matching for large IN-lists

From
David Geier
Date:
Hi Ilia!

On 29.12.2025 21:35, Ilia Evdokimov wrote:
> Hi hackers,
> 
> When estimating selectivity for ScalarArrayOpExpr (IN, ANY, ALL) and MCV
> statistics are available for the column, the planner currently matches
> IN-list elements against the MCV array using a nested loop. For large
> IN-lists and large MCV arrays this results in O(N*M) behavior, which can
> become unnecessarily expensive during planning.
> 
> Thanks to David for pointing out this case [0]
> 

Cool that you tackled this. I've seen this happening a lot in practice.

> This patch introduces a hash-based matching path, analogous to what is
> already done for MCV matching in join selectivity estimation (057012b
> commit). Instead of linearly scanning the MCV array for each IN-list
> element, we build a hash table and probe it to identify matches.
> 
> The hash table is built over the MCV values, not over the IN-list. The
> IN-list may contain NULLs, non-Const expressions, and duplicate values,
> whereas the MCV list is guaranteed to contain distinct, non-NULL values
> and represents the statistically meaningful domain we are matching
> against. Hashing the MCVs therefore avoids duplicate work and directly
> supports selectivity estimation.

The downside of doing it this way is that we always pay the price of
building a possibly big hash table if the column has a lot of MCVs, even
for small IN lists. Why can't we always build the hash table on the
smaller list, as we already do in the join selectivity estimation?

For NULL we can add a flag to the hash entry, non-Const expressions must
be evaluated anyway, and duplicate values will be discarded during insert.

> 
> For each IN-list element, if a matching MCV is found, we add the
> corresponding MCV frequency to the selectivity estimate. If no match is
> found, the remaining selectivity is estimated in the same way as the
> existing non-MCV path (similar to var_eq_const when the constant is not
> present in the MCV list).
> 

The code in master currently calls an operator-specific selectivity
estimation function. For equality this is typically eqsel() but the
function can be specified during CREATE OPERATOR.

Can we safely special-case the behavior of eqsel() for all possible
operators in the ScalarArrayOpExpr case?

> The hash-based path is enabled only when both a sufficiently large IN-
> list and an MCV list are present, and suitable hash functions exist for
> the equality operator. The threshold is currently the same as the one
> used for join MCV hashing, since the underlying algorithmic tradeoffs
> are similar.

Seems reasonable.

I'll test and review in more detail once we've clarified the design.

--
David Geier



Re: Hash-based MCV matching for large IN-lists

From
Ilia Evdokimov
Date:

Hi David!

Thanks for the feedback.

On 05.01.2026 11:54, David Geier wrote:
This patch introduces a hash-based matching path, analogous to what is
already done for MCV matching in join selectivity estimation (057012b
commit). Instead of linearly scanning the MCV array for each IN-list
element, we build a hash table and probe it to identify matches.

The hash table is built over the MCV values, not over the IN-list. The
IN-list may contain NULLs, non-Const expressions, and duplicate values,
whereas the MCV list is guaranteed to contain distinct, non-NULL values
and represents the statistically meaningful domain we are matching
against. Hashing the MCVs therefore avoids duplicate work and directly
supports selectivity estimation.
The downside of doing it this way is that we always pay the price of
building a possibly big hash table if the column has a lot of MCVs, even
for small IN lists. Why can't we build the hash table always on the
smaller list, like we do already in the join selectivity estimation?

For NULL we can add a flag to the hash entry, non-Const expressions must
anyways be evaluated and duplicate values will be discarded during insert.


After thinking more about this, I realized that hashing the smaller side is actually a better match for how selectivity is currently modeled. Given this comment in master:

         * If we were being really tense we would try to confirm that the
         * elements are all distinct, but that would be expensive and it
         * doesn't seem to be worth the cycles; it would amount to penalizing
         * well-written queries in favor of poorly-written ones.  However, we
         * do protect ourselves a little bit by checking whether the
         * disjointness assumption leads to an impossible (out of range)
         * probability; if so, we fall back to the normal calculation.

when the hash table is built on the IN-list, duplicate IN-list values are automatically eliminated during insertion, so we no longer risk summing the same MCV frequency multiple times. This makes the disjoint-probability estimate more robust and in practice slightly more accurate.

One thing I initially missed is that there are actually three different places where ScalarArrayOpExpr is handled: the Const array case, the ArrayExpr case, and everything else. The Const and ArrayExpr cases require different implementations of the same idea: in the Const case we can directly hash and probe Datum values, while in the ArrayExpr case we must work on Node * elements, separating constant and non-constant entries and hashing only the constants. The current v2 therefore applies the same MCV-hash optimization in both branches, using two tailored code paths that preserve the existing semantics of how non-Const elements are handled by var_eq_non_const().

If the MCV list is smaller than the IN-list, the behavior is the same as in v1 of the patch. If the IN-list is smaller, we instead build a hash table over the distinct constant elements of the IN-list and then:
- Scan the MCV list and sum the frequencies of those MCVs that appear in the IN-list;
- Count how many distinct non-NULL constant IN-list elements are not present in the MCV list;
- Estimate the probability of each such non-MCV value using the remaining frequency mass;
- Handle non-constant IN-list elements separately using var_eq_non_const(), exactly as in the existing implementation.


For each IN-list element, if a matching MCV is found, we add the
corresponding MCV frequency to the selectivity estimate. If no match is
found, the remaining selectivity is estimated in the same way as the
existing non-MCV path (similar to var_eq_const when the constant is not
present in the MCV list).

The code in master currently calls an operator-specific selectivity
estimation function. For equality this is typically eqsel() but the
function can be specified during CREATE OPERATOR.

Can we safely special-case the behavior of eqsel() for all possible
operators in the ScalarArrayOpExpr case?


Unfortunately there is no safe way to make this optimization generic for arbitrary restrict functions, because a custom RESTRICT function does not have to use MCVs at all. IMO, in practice the vast majority of ScalarArrayOpExpr uses of = or <> rely on the built-in equality operators whose selectivity is computed by eqsel()/neqsel(), so I limited this optimization to those cases.

I’ve attached v2 of the patch. It currently uses two fairly large helper functions for the Const and ArrayExpr cases; this is intentional to keep the logic explicit and reviewable, even though these will likely need refactoring or consolidation later.

-- 
Best regards,
Ilia Evdokimov,
Tantor Labs LLC,
https://tantorlabs.com/

Attachments

Re: Hash-based MCV matching for large IN-lists

From
David Geier
Date:
On 14.01.2026 11:19, Ilia Evdokimov wrote:
> After thinking more about this I realized that this is actually a better
> match for how selectivity is currently modeled. After this comments in
> master
> 
>          * If we were being really tense we would try to confirm that the
>          * elements are all distinct, but that would be expensive and it
>          * doesn't seem to be worth the cycles; it would amount to
> penalizing
>          * well-written queries in favor of poorly-written ones.
> However, we
>          * do protect ourselves a little bit by checking whether the
>          * disjointness assumption leads to an impossible (out of range)
>          * probability; if so, we fall back to the normal calculation.
> 
> when the hash table is built on the IN-list, duplicate IN-list values
> are automatically eliminated during insertion, so we no longer risk
> summing the same MCV frequency multiple times. This makes the disjoint-
> probability estimate more robust and in practice slightly more accurate.

Does that mean that we get a different estimation result, depending on
if the IN list is smaller or not? I think we should avoid that because
estimation quality might flip for the user unexpectedly.

> One thing I initially missed is that there are actually three different
> places where ScalarArrayOpExpr is handled - the Const array case, the
> ArrayExpr case and others - and Const and ArrayExpr require different
> implementation of the same idea. In Const case we can directly hash and
> probe Datum value, while ArrayExpr case we must work on Node* element,
> separating constant and non-constant entries and only hashing the
> constants. The current v2 therefore applies the same MCV-hash
> optimization in both branches, but using two tailored code paths that
> preserve the existing semantics of how non-Const elements are handled by
> var_eq_non_const().
> 
> If the MCV list is smaller than the IN-list, the behavior is the same as
> in v1 of the patch. If the IN-list is smaller, we instead build a hash
> table over the distinct constant elements of the IN-list and then:
> - Scan the MCV list and sum the frequencies of those MCVs that appear in
> the IN-list;
> - Count how many distinct non-NULL constant IN-list elements are not
> present in the MCV list;

Is this to make sure we keep getting the same estimation result if the
IN list is smaller and contains duplicates?

> - Estimate the probability of each such non-MCV value using the
> remaining frequency mass;
> - Handle non-constant IN-list elements separately using
> var_eq_non_const(), exactly as in the existing implementation.

OK

>>>
>> The code in master currently calls an operator-specific selectivity
>> estimation function. For equality this is typically eqsel() but the
>> function can be specified during CREATE OPERATOR.
>>
>> Can we safely special-case the behavior of eqsel() for all possible
>> operators in the ScalarArrayOpExpr case?
> 
> 
> Unfortunately there is no safe way to make this optimization generic for
> arbitrary restrict functions, because a custom RESTRICT function does
> not have to use MCVs at all. IMO, in practice the vast majority of
> ScalarArrayOpExpr uses with = or <> rely on the built-in equality
> operators whose selectivity is computed by eqsel()/neqsel(), so I
> limited this optimization to those cases.

How did you do that? I cannot find the code that checks for that.

> I’ve attached v2 of the patch. It currently uses two fairly large helper
> functions for the Const and ArrayExpr cases; this is intentional to keep
> the logic explicit and reviewable, even though these will likely need
> refactoring or consolidation later.

Beyond that, it seems like you can also combine/reuse a bunch of code
for creating the hash map on the IN vs on the MCV list.

For the MCVs, can't we reuse some code from the eqjoinsel() optimization
we did? The entry and context structs look similar enough to only need one.

Making the code more compact would ease reviewing a lot.

--
David Geier



Re: Hash-based MCV matching for large IN-lists

From
Ilia Evdokimov
Date:

Hi,

On 19.01.2026 17:01, David Geier wrote:
Does that mean that we get a different estimation result, depending on
if the IN list is smaller or not? I think we should avoid that because
estimation quality might flip for the user unexpectedly.

I think you're right.

To address this, I changed the hash-table entry to track an additional 'count' field, representing how many times a particular value appears on the hashed side. When inserting into the hash table, if the value is already present, I increment 'count'; otherwise I create a new entry with count = 1.


The code in master currently calls an operator-specific selectivity
estimation function. For equality this is typically eqsel() but the
function can be specified during CREATE OPERATOR.

Can be safely special-case the behavior of eqsel() for all possible
operators for the ScalarArrayOpExpr case?

Unfortunately there is no safe way to make this optimization generic for
arbitrary restrict functions, because a custom RESTRICT function does
not have to use MCVs at all. IMO, in practice the vast majority of
ScalarArrayOpExpr uses with = or <> rely on the built-in equality
operators whose selectivity is computed by eqsel()/neqsel(), so I
limited this optimization to those cases.
How did you do that? I cannot find the code that checks for that.

In scalararraysel(), before attempting the hash-based path, we determine whether the operator behaves like equality or inequality based on its selectivity function:

if (oprsel == F_EQSEL || oprsel == F_EQJOINSEL)
    isEquality = true;
else if (oprsel == F_NEQSEL || oprsel == F_NEQJOINSEL)
    isInequality = true;

Then the hash-based MCV matching is only attempted under:

if ((isEquality || isInequality) && !is_join_clause)

So effectively this restricts the optimization to operators whose selectivity is computed by eqsel()/neqsel() on restriction clauses. Join clauses (which would use eqjoinsel()/neqjoinsel()) are excluded via !is_join_clause.


For the MCVs, can't we reuse some code from the eqjoinsel() optimization
we did? The entry and context structs look similar enough to only need one.

I considered reusing pieces from the eqjoinsel() optimization, but in practice it turned out to be difficult to share the code cleanly. Also, looking at this file more broadly, we already have multiple places that reimplement similar patterns.


Making the code more compact would ease reviewing a lot.

Agreed, making the code more compact would ease reviewing a lot. I've found a way to unify the Const-array and ArrayExpr cases: in the ArrayExpr path, we can first construct the same arrays as in the Const-array case (elem_values, elem_nulls), and additionally build a boolean array elem_const[] indicating whether each element is a Const. The hash-based MCV matching function can then:

- Ignore NULL and non-Const elements when building and probing the hash table.
- Count how many non-Const elements are present.
- After MCV and non-MCV constant handling, account for non-Const elements separately using var_eq_non_const() and fold their probabilities into the same ANY/ALL accumulation logic.

I've attached the v3 patch with this change.

To validate that the estimation results stay the same, I temporarily kept both implementations (hash-based and nested-loop) and compared their resulting selectivity values, logging any difference. I ran the regression tests and some local workload testing with this check enabled, and did not observe any mismatches. I've attached the patch with this logging as well.

-- 
Best regards,
Ilia Evdokimov,
Tantor Labs LLC,
https://tantorlabs.com/


Attachments

Re: Hash-based MCV matching for large IN-lists

From
Chengpeng Yan
Date:

> On Jan 14, 2026, at 18:19, Ilia Evdokimov <ilya.evdokimov@tantorlabs.com> wrote:
> I’ve attached v2 of the patch. It currently uses two fairly large helper
> functions for the Const and ArrayExpr cases; this is intentional to keep
> the logic explicit and reviewable, even though these will likely need
> refactoring or consolidation later.

Thanks for working on this.

I had previously reviewed the v2 patch and wrote up some comments, but
didn’t get a chance to send them before v3 was posted. I haven’t yet had
time to review v3 in detail, so I’m not sure whether the issues below
have already been addressed there. I’m posting my earlier review notes
first and will follow up with comments on v3 once I’ve had a chance to
look at it.

* Treat NULL array elements as zero selectivity for ALL:

In `scalararray_mcv_hash_match_const()` (and similarly
`scalararray_mcv_hash_match_expr()`), NULL array elements are currently
handled by simply continuing the loop (e.g. `if (elem_nulls[i])
continue;`), effectively ignoring them.

This behavior is only correct for ANY/OR semantics. For ALL/AND (`useOr
= false`), a single NULL array element causes the `ScalarArrayOpExpr` to
never return TRUE for strict operators (as assumed by the surrounding
code and comments). In that case, the correct selectivity estimate
should be 0.0, but the current code path can return a non-zero
selectivity.


* Fix cross-type equality argument order in `mcvs_in_equal`:

`mcvs_in_equal()` always invokes the equality function as `(key0,
key1)`. However, `simplehash` provides `key0` from the hash table and
`key1` as the probe key.

In the branch where the hash table is built over IN-list values and
probed with MCVs (the `sslot.nvalues > num_elems` path), this reverses
the operator’s argument order for cross-type equality operators. This
risks incorrect match decisions and may misinterpret Datums compared to
the operator’s declared signature.


* Include non-MCV IN-list constants in non-disjoint selectivity:

In the `sslot.nvalues > num_elems` path of
`scalararray_mcv_hash_match_const()` and
`scalararray_mcv_hash_match_expr()`, non-MCV constant elements currently
only contribute via `disjoint_sel`.

For cases where disjoint-probability estimation is not used (e.g. ALL,
`<> ANY`, or when `disjoint_sel` is out of range), the code leaves the
selectivity based solely on MCV matches. This effectively treats non-MCV
constants as having probability 1.0, leading to overestimation of
selectivity.


* Avoid double-negating inequality estimates for non-Const elements:

In the `scalararray_mcv_hash_match_expr()` `sslot.nvalues > num_elems`
branch, non-Const elements are handled via

`var_eq_non_const(..., negate = isInequality)`

and then later adjusted again with

`if (isInequality)
s1 = 1.0 - s1 - nullfrac;`

This results in a double negation for inequality cases, effectively
turning the estimate back into an equality selectivity.

--
Best regards,
Chengpeng Yan

Re: Hash-based MCV matching for large IN-lists

From
Ilia Evdokimov
Date:

Hi Chengpeng,

Thanks for your review!


On 28.01.2026 16:08, Chengpeng Yan wrote:
* Treat NULL array elements as zero selectivity for ALL:

Agreed. For ALL/AND semantics the function now returns selectivity = 0.0 as soon as a NULL element is encountered.


* Fix cross-type equality argument order in `mcvs_in_equal`:

Agreed. Added an 'op_is_reversed' flag to MCVInHashContext, same as in MCVHashContext.


* Include non-MCV IN-list constants in non-disjoint selectivity:

This is not applicable to v3.


* Avoid double-negating inequality estimates for non-Const elements:

Agreed. var_eq_non_const() is now always called with negate = false, so the negation is not applied twice.


Attached the v4 patch with the above fixes.

-- 
Best regards,
Ilia Evdokimov,
Tantor Labs LLC,
https://tantorlabs.com/

Attachments

Re: Hash-based MCV matching for large IN-lists

From
David Geier
Date:
Hi!

> Attached v4 patch with above fixes.

Good progress!

I did another pass over the code, focusing on structure:

- MCVHashContext and MCVInHashContext are identical. MCVHashEntry and
MCVInHashEntry only differ by the count member. I would, as said before,
merge them and simply not use the count member in the join case.

- hash_mcv_in() and mcvs_in_equal() are identical to hash_mcv() and
mcvs_equal(). Let's remove the new functions and use the existing ones
instead, in the spirit of the previous point.

- The threshold constants are also identical. I would merge them into a
single, e.g. MCV_HASH_THRESHOLD, in the spirit of the previous two points.

- MCVHashTable_hash will then be interchangeable with
MCVInHashTable_hash. So let's remove MCVInHashTable_hash, in the spirit
of the previous three points.

- Use palloc_array() instead of palloc() when allocating arrays.

- We can avoid allocating the all-true elem_const array by passing NULL
for elem_const to scalararray_mcv_hash_match(), and considering a NULL
pointer to mean "all elements are constant".

- The following comment got copy&pasted from eqsel_internal() twice. It
reads a little strange now because we're not punting here by immediately
returning like in eqsel_internal() but instead fallback to the original
code path. Maybe say instead "... falling back to default code path to
compute default selectivity" or something like that.
    /*
     * If expression is not variable = something or something =
     * variable, then punt and return a default estimate.
     */

- The call to fmgr_info(opfuncoid, &eqproc) is currently under have_mcvs
but can be moved into the next if.

- elem_nulls and elem_const don't have to be 0-initialized via palloc0():
all elements are set in the subsequent for-loop. I believe elem_values
also doesn't have to be 0-initialized via palloc0().

- Have you checked that there's test coverage for the special cases
(nvalues_non_mcv > 0, nvalues_nonconst > 0, IN contains NULL,
isInequality == true, etc.)? If not, let's add tests for these.


I'll do a 2nd iteration, focusing on correctness, once these comments
are addressed and I've got the SQL from you so that I can test the
corner cases manually.

--
David Geier