Pavel Tavoda <pavel.tavoda@gmail.com> writes:
> -> Seq Scan on contractportfolio cp (cost=0.00..109171.90 rows=1 width=8) (actual time=0.009..486.918 rows=20426 loops=1)
>      Filter: ((validfor >= '2017-05-31'::date) AND (validfor <= '2017-05-31'::date))
> STUNNING!!!!
Yup, it's certainly that factor-of-20K rowcount misestimate that is
killing you here. Given that this estimate isn't too bad:
> -> Seq Scan on contractportfolio cp (cost=0.00..101496.91 rows=20197 width=8) (actual time=0.009..463.063 rows=20426 loops=1)
>      Filter: (validfor = '2017-05-31'::date)
I do not think your problem is one of out-of-date statistics. Rather,
the issue is just that we're bad at estimating narrow ranges. I did
some work on that today[1], but it won't show up in a released PG version
before next year. In the meantime, I believe the existing code would
arrive at a plausible answer if the value being checked were present in
the column's pg_stats.most_common_vals list. Maybe you could fix this
by increasing the statistics target for the column or the whole table
(and re-analyzing it, of course); a sketch follows.
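
Something along these lines (the table and column names are the ones from
your plan above; the target value 500 is only an illustrative choice, the
default being 100):

    -- Check whether the date already appears among the column's most
    -- common values; if it does, the estimator can use its frequency:
    SELECT most_common_vals
    FROM pg_stats
    WHERE tablename = 'contractportfolio' AND attname = 'validfor';

    -- Raise the per-column statistics target, then re-analyze so the
    -- larger MCV list actually gets collected:
    ALTER TABLE contractportfolio ALTER COLUMN validfor SET STATISTICS 500;
    ANALYZE contractportfolio;

With more MCV entries, a heavily-repeated date like '2017-05-31' is more
likely to be captured explicitly, which should pull the rowcount estimate
back toward reality.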
regards, tom lane
[1] https://www.postgresql.org/message-id/12232.1499140410%40sss.pgh.pa.us