Re: Assertion failure with LEFT JOINs among >500 relations

From: David Rowley
Subject: Re: Assertion failure with LEFT JOINs among >500 relations
Date:
Msg-id: CAApHDvqnG4_a0sU0USLeTNDMe-giQ6howEmEfB9SUc_k1vhMxA@mail.gmail.com
In reply to: Assertion failure with LEFT JOINs among >500 relations  (Onder Kalaci <onderk@microsoft.com>)
Responses: Re: Assertion failure with LEFT JOINs among >500 relations  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Fri, 9 Oct 2020 at 08:16, Onder Kalaci <onderk@microsoft.com> wrote:
> I hit an assertion failure. When asserts are disabled, it works fine even with more tables (>5000).
>
> Steps to reproduce:
> CREATE TABLE users_table (user_id int, time timestamp, value_1 int, value_2 int, value_3 float, value_4 bigint);
> 250 relations work fine, see the query (too long to copy & paste here):
> https://gist.github.com/onderkalaci/2b40a18d989da389ee4fb631e1ad7c0e#file-steps_to_assert_pg-sql-L41

I had a quick look at this and I can recreate it using the following (in psql):

select 'explain select count(*) from users_table ' ||
       string_agg('LEFT JOIN users_table u' || x::text || ' USING (user_id)', ' ')
from generate_series(1,379) x;
\gexec

That triggers a failure of the Assert(outer_skip_rows <= outer_rows) in
initial_cost_mergejoin().

The reason it fails is that outer_path_rows has become infinity.
calc_joinrel_size_estimate() keeps multiplying the outer and inner row
estimates together and applying the join selectivity of 0.05 (which comes
from the default of 200 distinct values we assume in the absence of any
stats), and since each join level grows the estimate faster than the
selectivity shrinks it, after enough joins the number becomes large enough
to overflow to infinity.
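
To make the runaway concrete, here's a standalone sketch (not planner
code; the starting row count and the inner-side estimate are made-up
assumptions) of that repeated multiplication:

/*
 * Sketch of how a per-join multiplier > 1 drives the row estimate to +Inf.
 * The figures below are illustrative assumptions, not the planner's actual
 * intermediate values for this query.
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		rows = 2550.0;			/* assumed base estimate for users_table */
	double		inner_rows = 2550.0;	/* same table on the inner side of each join */
	double		selectivity = 0.05;		/* the join selectivity quoted above */

	for (int joins = 1; joins <= 1000; joins++)
	{
		/* loosely what calc_joinrel_size_estimate() does at each join level */
		rows = rows * inner_rows * selectivity;
		if (isinf(rows))
		{
			printf("row estimate became infinity after %d joins\n", joins);
			break;
		}
	}
	return 0;
}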

Instead of the 379 joins from above, try 378 and you get:

 Aggregate  (cost=NaN..NaN rows=1 width=8)
   ->  Nested Loop Left Join  (cost=33329.16..NaN rows=Infinity width=0)
         Join Filter: (users_table.user_id = u378.user_id)
         ->  Merge Left Join  (cost=33329.16..<very large number> width=4)
               Merge Cond: (users_table.user_id = u377.user_id)
               ->  Merge Left Join  (cost=33240.99..<very large number> width=4)

Changing the code in initial_cost_mergejoin() to add:

if (outer_path_rows <= 0 || isnan(outer_path_rows))
    outer_path_rows = 1;
+else if (isinf(outer_path_rows))
+    outer_path_rows = DBL_MAX;

does seem to fix the problem, but that's certainly not the right fix.

Perhaps the right fix is to modify clamp_row_est() with:

@@ -193,7 +194,9 @@ clamp_row_est(double nrows)
         * better and to avoid possible divide-by-zero when interpolating costs.
         * Make it an integer, too.
         */
-       if (nrows <= 1.0)
+       if (isinf(nrows))
+               nrows = rint(DBL_MAX);
+       else if (nrows <= 1.0)
                nrows = 1.0;
        else
                nrows = rint(nrows);

but the row estimates are getting pretty insane well before then.
DBL_MAX is 226 orders of magnitude more than the estimated number of
atoms in the observable universe, so it seems safe to say nobody is
going to figure out how to store that many tuples on disk any time soon.
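
For reference, a quick back-of-the-envelope check of that figure (the
~1e82 atom count is an assumption, being one commonly cited estimate):

#include <float.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double		atoms = 1e82;	/* assumed atoms in the observable universe */

	printf("DBL_MAX = %g (about 10^%.0f)\n", DBL_MAX, log10(DBL_MAX));
	printf("orders of magnitude above the atom count: %.0f\n",
		   log10(DBL_MAX) - log10(atoms));
	return 0;
}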

Perhaps DBL_MAX is way too big a number to clamp at. I'm just not sure
what we should reduce it to so that it's reasonable.

David


