On 03/01/2020 at 15:50, Jeff Janes wrote:
osm=# explain analyze execute mark_ways_by_node(1836953770);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on planet_osm_ways (cost=2468.37..305182.32 rows=301467 width=8) (actual time=0.039..0.042 rows=2 loops=1)
Recheck Cond: (nodes && '{1836953770}'::bigint[])
(My quick and dirty patch posted there still compiles and works, if you would like to test that it fixes the problem for you.)
Because the number of rows is vastly overestimated, so is the cost. Which then causes JIT to kick in counter-productively, due to the deranged cost exceeding jit_above_cost.
Cheers,
Jeff
This wrong cost may have other side effects, such as launching parallel workers.
Another person hit the same problem, but my simple fix of disabling jit did not work for him. My tests were done on a smaller database (an OpenStreetMap extract covering only France); his were on a full-planet dataset, where the estimated row counts were about 10x higher.
We found a workaround (disabling jit and parallel workers for the session), but a more general fix for this row misestimation should be considered, since it can affect other cases ;)
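For reference, the session-level workaround looks like this. Both settings are standard PostgreSQL parameters; this is a sketch of what we used, not a recommendation for other workloads:

```sql
-- Disable JIT compilation for this session, so the overestimated
-- plan cost no longer pushes the query past jit_above_cost.
SET jit = off;

-- Disable parallel workers for this session; the inflated row
-- estimate otherwise makes the planner launch them needlessly.
SET max_parallel_workers_per_gather = 0;
```

Both settings only affect the current session, so other connections keep the server defaults.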
Thanks for your time on this issue.
Christian