The main question, I think, is what to do with it:
-> Hash Join (cost=9337.97..18115.71 rows=34489 width=244) (actual time=418.054..1156.453 rows=205420 loops=1)
     Hash Cond: (customerdetails.customerid = entity.id)
     -> Seq Scan on customerdetails (cost=0.00..4752.46 rows=327146 width=13) (actual time=0.021..176.389 rows=327328 loops=1)
     -> Hash (cost=6495.65..6495.65 rows=227386 width=231) (actual time=417.839..417.839 rows=205420 loops=1)
           Buckets: 32768 Batches: 1 Memory Usage: 16056kB
           -> Index Scan using entity_setype_idx on entity (cost=0.00..6495.65 rows=227386 width=231) (actual time=0.033..253.880 rows=205420 loops=1)
                 Index Cond: ((setype)::text = 'con_s'::text)
-> Index Scan using con_address_pkey on con_address (cost=0.00..0.27 rows=1 width=46) (actual time=0.003..0.004 rows=1 loops=205420)
As you can see, the per-relation estimates from the access methods are fine; it is the join result set that is badly estimated (34489 expected vs. 205420 actual rows).
How do I deal with this?
Maybe a hack with a CTE could help, but is there a way to improve the statistics so the planner sees the correlation?
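For reference, the CTE hack I have in mind would look roughly like this (a sketch only: table and column names are taken from the plan above, and the join key to con_address is an assumption, since the plan doesn't show the index condition). On releases before PostgreSQL 12, a WITH query is an optimization fence, so the join is materialized first and the outer query is planned against its real row count:

```sql
-- Sketch, not a tested query. Pre-12, WITH acts as an optimization
-- fence: the inner join is evaluated and materialized before the
-- outer join to con_address is planned.
WITH con AS (
    SELECT e.*          -- entity columns
    FROM entity e
    JOIN customerdetails cd ON cd.customerid = e.id
    WHERE e.setype = 'con_s'
)
SELECT *
FROM con
JOIN con_address ca ON ca.id = con.id;  -- assumed join key
```

Note that raising the per-column statistics target (ALTER TABLE ... ALTER COLUMN ... SET STATISTICS, then ANALYZE) probably won't help here: the single-column estimates are already close (227386 vs. 205420), and plain column statistics cannot express the cross-table correlation between setype and the join selectivity.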