Thread: BUG #18466: Wrong row estimate for nested loop
The following bug has been logged on the website:
Bug reference: 18466
Logged by: Yan Wu
Email address: 4wuyan@gmail.com
PostgreSQL version: 16.3
Operating system: Debian Linux
Description:
Hello everyone, I noticed that the row estimate for a nested loop is wrong when a
CTE is involved.
## How to reproduce
You can reproduce it in a docker container. I originally saw it on AWS
Aurora.
```bash
docker run --name test-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres:16.3
```
Simple table setup
```sql
create table t1(a int);
create table t2(b int);
create index my_index on t1 using btree (a);
insert into t1 select generate_series(1, 100000) from generate_series(1, 3);
insert into t2 select generate_series(1, 100) from generate_series(1, 10);
analyze t1;
analyze t2;
/* Optionally make sure a nested loop is used */
-- set enable_mergejoin = off;
-- set enable_hashjoin = off;
```
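As a side note, here is a small sketch (plain Python, not part of the report) of the cardinalities this setup produces, which is why `rows=3` per index probe is the correct per-loop estimate:

```python
# Cardinalities implied by the setup script above.
t1_rows = 100_000 * 3      # generate_series(1, 100000), repeated 3 times
t1_distinct = 100_000      # values 1..100000, each appearing 3 times
t2_rows = 100 * 10         # generate_series(1, 100), repeated 10 times
t2_distinct = 100          # values 1..100, each appearing 10 times

# Each distinct value probed against my_index matches this many t1 rows:
matches_per_probe = t1_rows / t1_distinct
print(matches_per_probe)   # 3.0
```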
The row estimate for the following plan is wrong: 2980 / 200 is not close to
3.
```
postgres=# explain with my_cte as materialized (select b from t2) select *
from t1 where t1.a in (select b from my_cte);
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Nested Loop  (cost=37.92..2674.77 rows=2980 width=4)
   CTE my_cte
     ->  Seq Scan on t2  (cost=0.00..15.00 rows=1000 width=4)
   ->  HashAggregate  (cost=22.50..24.50 rows=200 width=4)
         Group Key: my_cte.b
         ->  CTE Scan on my_cte  (cost=0.00..20.00 rows=1000 width=4)
   ->  Index Only Scan using my_index on t1  (cost=0.42..13.15 rows=3 width=4)
         Index Cond: (a = my_cte.b)
(8 rows)
```
## Expected output
A simple `distinct` will give the same plan, but with the correct row
estimate:
```
postgres=# explain with my_cte as materialized (select b from t2) select *
from t1 where t1.a in (select distinct b from my_cte);
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Nested Loop  (cost=37.92..2674.77 rows=596 width=4)
   CTE my_cte
     ->  Seq Scan on t2  (cost=0.00..15.00 rows=1000 width=4)
   ->  HashAggregate  (cost=22.50..24.50 rows=200 width=4)
         Group Key: my_cte.b
         ->  CTE Scan on my_cte  (cost=0.00..20.00 rows=1000 width=4)
   ->  Index Only Scan using my_index on t1  (cost=0.42..13.15 rows=3 width=4)
         Index Cond: (a = my_cte.b)
(8 rows)
```
This is my expected output: 596 / 200 is approximately 3.
If you compare the two plans, they are exactly the same plan with the same
cost estimates; the only difference is `rows=2980` versus `rows=596` on the
Nested Loop node. It seems `rows=1000` (the CTE Scan estimate) rather than
`rows=200` (the HashAggregate estimate) is used as the outer row count in the
wrong output.
This query is trivial and for demonstration purposes only, but it could be
part of a bigger query, where the wrong row estimate can lead to a bad
execution plan.
## Postgres version
16.3
## Platform information
The latest postgres Docker image, which is based on Debian Linux. I originally
found the issue on AWS Aurora.
On Wed, 15 May 2024 at 21:07, PG Bug reporting form
<noreply@postgresql.org> wrote:
> Hello everyone, I noticed that the row estimate for a nested loop is wrong when a
> CTE is involved.
The yet-to-be-released PostgreSQL 17 should have some code which
improves this [1].
With your test case on that version, I see:
```
postgres=# explain analyze with my_cte as materialized (select b from t2) select *
postgres-# from t1 where t1.a in (select b from my_cte);
                                                         QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=37.92..469.00 rows=296 width=4) (actual time=0.574..0.893 rows=300 loops=1)
   CTE my_cte
     ->  Seq Scan on t2  (cost=0.00..15.00 rows=1000 width=4) (actual time=0.015..0.079 rows=1000 loops=1)
   ->  HashAggregate  (cost=22.50..23.50 rows=100 width=4) (actual time=0.394..0.406 rows=100 loops=1)
         Group Key: my_cte.b
         Batches: 1  Memory Usage: 24kB
         ->  CTE Scan on my_cte  (cost=0.00..20.00 rows=1000 width=4) (actual time=0.025..0.229 rows=1000 loops=1)
   ->  Index Only Scan using my_index on t1  (cost=0.42..4.27 rows=3 width=4) (actual time=0.004..0.004 rows=3 loops=100)
         Index Cond: (a = my_cte.b)
         Heap Fetches: 0
```
There are no plans to backpatch this improvement into PostgreSQL 16.
It's not really classed as a bug, just something that could have been
improved... which it now is.
David
[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f7816aec2
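For comparison, a rough sketch (plain Python, numbers taken from the two plans in this thread) of how far each version's estimate is from the 300 rows actually returned:

```python
actual_rows = 300        # rows=300 in the EXPLAIN ANALYZE output above
v16_estimate = 2980      # 16.3 plan: 1000 (CTE Scan) used as the outer row count
v17_estimate = 296       # 17 plan: 100 (HashAggregate) used instead

print(v16_estimate / actual_rows)   # roughly a 10x overestimate
print(v17_estimate / actual_rows)   # nearly exact
```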
Hi David, thanks for the reply!
The reason I submitted it as a bug is that it feels like wrong output from the planner: the nested loop node picks up the incorrect row estimate from the HashAggregate node as its multiplier. It's a very minor issue, though, which causes slowness at worst.
Really glad to see it's improved in PostgreSQL 17!