If you have the opportunity, consider running a manual VACUUM ANALYZE daily
during low-load times. That may keep autovacuum from hitting its thresholds
during high-load times. Alternatively, tune autovacuum to be more aggressive
during low-load times and less aggressive during high-load times.
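
For concreteness, a minimal sketch of both approaches, assuming a
hypothetical table name "big_table" and purely illustrative values (the
statements could be scheduled from cron or pg_cron):

    -- Option 1: a scheduled manual run during a low-load window. A manual
    -- ANALYZE resets the changed-tuple counter, so autoanalyze is less
    -- likely to hit its threshold during peak hours.
    VACUUM ANALYZE big_table;

    -- Option 2: switch per-table settings by time of day.
    -- At the start of the low-load window (more aggressive):
    ALTER TABLE big_table SET (autovacuum_analyze_scale_factor = 0.02);
    -- At the start of the high-load window (less aggressive):
    ALTER TABLE big_table SET (autovacuum_analyze_scale_factor = 0.20);
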
> On Dec 19, 2017, at 5:03 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
>
>
>
>> On 12/19/2017 05:47 PM, Habib Nahas wrote:
>> Hi,
>>
>> We operate an RDS Postgres 9.5 instance and have periodic CPU spikes to
>> 100%. These spikes appear to be due to autoanalyze kicking in on our
>> larger tables.
>>
>> Our largest table has 75 million rows and the autoanalyze scale factor
>> is set to 0.05.
>>
>> The documentation I've read suggests that ANALYZE always operates on
>> the entire table and is not incremental. Given that supposition, are
>> there ways to control the cost (especially CPU) of the autoanalyze
>> operation? Would a more aggressive autoanalyze scale factor (0.01)
>> help? With the current scale factor we see an autoanalyze about once a
>> week; query performance has been acceptable so far, which could imply
>> that the scale factor could be increased if necessary.
>>
>
> No, reducing the scale factor to 0.01 will not help at all; it will
> actually make the issue worse. The only thing autoanalyze does is run
> ANALYZE, which *always* collects a fixed-size sample. Making it more
> frequent will not reduce the amount of work done on each run.
>
> So the first question is whether you are really not using the default
> (0.1), i.e. whether you have in fact reduced it to 0.05.
>
> The other question is why it's so CPU-intensive. Are you using the
> default statistics_target value (100), or have you increased that too?
>
> regards
>
> --
> Tomas Vondra http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
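
For reference, a quick worked example of the trigger arithmetic discussed
above: autoanalyze fires once the number of tuples changed since the last
ANALYZE exceeds autovacuum_analyze_threshold +
autovacuum_analyze_scale_factor * reltuples. For the 75-million-row table:

    at 0.05: 50 + 0.05 * 75,000,000 ~= 3.75M changed rows per run
    at 0.01: 50 + 0.01 * 75,000,000 ~=  750K changed rows per run

i.e. the lower factor triggers roughly five times as many runs, each
sampling the same fixed 300 * statistics-target rows (30,000 rows at the
default target of 100), which is why it makes the CPU problem worse, not
better. One way to check the settings Tomas asks about (again assuming the
hypothetical table name "big_table"):

    SHOW default_statistics_target;   -- 100 unless it has been raised

    SELECT reloptions                 -- per-table autovacuum overrides,
    FROM pg_class                     -- e.g. the 0.05 scale factor
    WHERE relname = 'big_table';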