On Mon, Nov 06, 2017 at 01:18:00PM +0000, Adam Torres wrote:
> Good morning all,
>
> We have a problem with performance after upgrading from 9.3 to 9.6, where certain queries take 9 times longer to run.
> On our initial attempt to upgrade, we noticed the system as a whole was taking longer to run through normal daily
> processes. The query with the largest run time was picked to act as a measuring stick.
> https://explain.depesz.com/s/z71u
> Planning time: 8.218 ms
> Execution time: 639319.525 ms
>
> Same query as run on 9.3
> https://explain.depesz.com/s/gjN3
> Total runtime: 272897.150 ms
Actually, it looks to me like both query plans are poor, because of this:
| Hash Join (cost=85,086.25..170,080.80 ROWS=40 width=115) (actual time=32.673..84.427 ROWS=13,390 loops=1)
| Hash Cond: (av.customer_id = cc_1.id)
The planner estimated 40 rows but the join actually returned 13,390, a ~335x
underestimate, which can easily push the planner into a bad plan. If there are
a large number of distinct customer_ids (maybe with nearly equal frequencies),
it might help to
ALTER TABLE av ALTER customer_id SET STATISTICS 400
..same for cc_1.id. Note that av and cc_1 are query aliases, so run the
ALTER against the underlying tables. Then re-analyze those tables (how large
are they?).
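For example (a sketch: cc_1 appears to alias customers_customer, and you'd
substitute the real table behind av for the other column):

ALTER TABLE customers_customer ALTER id SET STATISTICS 400;
ANALYZE customers_customer;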
Afterwards, see if the statistics improve:
SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname,
       n_distinct, array_length(most_common_vals,1) n_mcv
FROM pg_stats
WHERE tablename~'customers_customer' AND attname='id'
ORDER BY 1;
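To judge how far off n_distinct is, you can compare it against the real
distinct count (again assuming cc_1 aliases customers_customer; this scans
the whole table, so it may be slow on a big one):

SELECT count(DISTINCT id) FROM customers_customer;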
The goal is to get at least an accurate value for n_distinct (but preferably
also to store the most frequent IDs). I wouldn't bother re-running the query
unless you find that increasing the stats target causes the plan to change.
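If even a higher statistics target can't produce a sane n_distinct, it can be
pinned manually; here's a sketch with a placeholder value (use the real
distinct count from the SELECT above):

ALTER TABLE customers_customer ALTER id SET (n_distinct = 50000);
ANALYZE customers_customer;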
Justin