Discussion: 8.0.3 pg_autovacuum doesn't clear out stats table?
Hi,

I wonder if anyone can help me identify why my background pg_autovacuum doesn't appear to clear out the stats table entries?

Here are the non-commented lines in my postgresql.conf:

max_connections = 100
shared_buffers = 1000        # min 16, at least max_connections*2, 8KB each
vacuum_cost_delay = 10       # 0-1000 milliseconds
stats_start_collector = true
stats_row_level = true
lc_messages = 'C'            # locale for system error message strings
lc_monetary = 'C'            # locale for monetary formatting
lc_numeric = 'C'             # locale for number formatting
lc_time = 'C'                # locale for time formatting

And here is how I start pg_autovacuum:

# The PostgreSQL user is:
PGUSER=postgres
# LOG file is:
PG_AUTOVACUUM_LOG=/var/lib/postgresql/data/pg_autovacuum.log
# See the pg_autovacuum documentation for the details of how these
# parameters affect pg_autovacuum's aggressiveness.
VACUUM_BASE=1000
VACUUM_SCALE=2
SLEEP_BASE=300
SLEEP_SCALE=2
start-stop-daemon -o --chuid $PGUSER --start --quiet \
    --exec /usr/bin/pg_autovacuum -- \
    -D -v $VACUUM_BASE -V $VACUUM_SCALE -s $SLEEP_BASE -S $SLEEP_SCALE \
    -L $PG_AUTOVACUUM_LOG

The problem I have is that pg_autovacuum doesn't appear to release the unused space in the stats tables, so they grow without bound. It isn't clear to me that this is costing me anything other than disk space, but when I run a VACUUM FULL on the database it releases 2.5G of space accumulated over about 9 months. Most of my tables are update/append only (i.e. there are very few deletions from the database), and the space does appear to belong to the stats tables.

Is this a permissions or configuration issue, or does pg_autovacuum not consider vacuuming the stats tables?

Thanks in advance for any help you can offer me ...

Robin
Hi Matt,

Thanks for the reply.

Matthew T. O'Connor wrote:
> What do you mean by stats table entries? Are you saying there is
> bloat in a specific table, or just the whole database?

The table in question is pg_statistic. Its page count, as reported by pg_class.relpages, grows without bound until I perform a VACUUM FULL, at which point it drops to next to nothing.

I have a standby database server configured in the same way, with the same data set, but almost completely idle; I did the VACUUM FULL on it ~1-2 months back when I started to look into this issue. Since then it has accumulated 215K relpages (which is, I think, around 1.7G!) in the pg_statistic table.

Chasing through pg_database.oid and pg_class.relfilenode I compute that pg_statistic should be stored in $PGDATA/base/17230/16408. Here is what du reports for those files:

1049600 17230/16408
1049600 17230/16408.1
1049600 17230/16408.2
  15912 17230/16408.3

That is a total of just over 3G (which implies to me that the relpages count in pg_class is out of date).

Looking at another (tiny) database that is definitely only used for testing and I know hasn't been used in months, its pg_statistic has also grown to 1.1G on disk.

> Second, I think you need to make autovacuum more aggressive. I think
> most people find that the vacuum scaling factor should be more like
> 0.3 or so. With the settings you have a 1000 tuple table would have
> to see 3000 invalidated tuples before it does a vacuum that is
> probably much more slack space than you want.

Valuable advice, but as it happens we delete next to nothing, and I think the disk space in use by the database is reasonable for the data stored.

> Third: Are you seeing in the autovacuum log that autovacuum is
> actually trying to vacuum tables? Turn up the debugging to -d2 that
> should give you some more info as to why autovac is (or is not) doing
> what it's doing.

I will turn it on and take a look. I am guessing it will tell me that for whatever reason it isn't actually doing anything to the pg_catalog tables ...

> Fourth: there will always be some slack space. Under the best of
> circumstances, a database running with autovac will be somewhat bigger
> than the database right after a VACUUM FULL, probably not 2.5G, but some.

Understood - I am happy for there to be 50% slack - I just don't understand why pg_statistic grows without bound, and I am concerned that the size of the stats table might be causing me some performance issues (this is fear rather than fact).

Thanks again,
Robin
>> Third: Are you seeing in the autovacuum log that autovacuum is
>> actually trying to vacuum tables? Turn up the debugging to -d2 that
>> should give you some more info as to why autovac is (or is not) doing
>> what it's doing.
>
> I will turn it on and take a look. I am guessing it will tell me that
> for whatever reason it isn't actually doing anything to the pg_catalog
> tables ...

So I ran pg_autovacuum with -d 2 on the command line and I see this at startup:

Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: Printing command_args
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->host=(null)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->port=(null)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->username=(null)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->password=(null)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->logfile=(null)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->daemonize=0
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->sleep_base_value=300
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->sleep_scaling_factor=2.000000
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->vacuum_base_threshold=1000
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->vacuum_scaling_factor=2.000000
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->analyze_base_threshold=500
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->analyze_scaling_factor=1.000000
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->av_vacuum_cost_delay=(default)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->av_vacuum_cost_page_hit=(default)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->av_vacuum_cost_page_miss=(default)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->av_vacuum_cost_page_dirty=(default)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->av_vacuum_cost_limit=(default)
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: args->debug=2
<snip>

This is the output for the pg_statistic table in the database that is tiny, is used for testing, and has never been "vacuum full"ed as far as I know:

Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: table name: gems."pg_catalog"."pg_statistic"
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: relid: 16408; relisshared: 0
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: reltuples: 709086.000000; relpages: 78341
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: curr_analyze_count: 500680; curr_vacuum_count: 500680
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: last_analyze_count: 500680; last_vacuum_count: 500680
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] INFO: analyze_threshold: 709586; vacuum_threshold: 1419172
Feb 23 18:46:42 helm daemon_pg_autovacuum: [2006-02-23 18:46:42 GMT] DEBUG: added table: gems."pg_catalog"."pg_statistic"
<snip>

The tuples and thresholds for the pg_statistic table look like they might be the reason it doesn't get vacuumed. The question is, why does the reltuples on the stats table keep climbing until you do a vacuum full, at which point they return to (near) zero? No matter what threshold and scale I set, I am not going to get that table vacuumed, because its delete/update rates are 0 (it's only ever inserted into, I believe).

After that I just see:

DEBUG: 1 All DBs checked in: 478378 usec, will sleep for 300 secs.
DEBUG: 2 All DBs checked in: 73336 usec, will sleep for 300 secs.

Implying that there isn't anything to do right now (the staff have all gone, so there is no activity on the database as I type, and we know it won't vacuum the pg_statistic table due to thresholding).

What am I missing?

Thanks,
Robin
Please upgrade.

Are you running with the default FSM settings? I'm guessing that pg_statistics has just gotten so large that it's blowing out the FSM. The last few lines from vacuumdb -av would verify that...

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
Robin Iddon <robin@edesix.com> writes:
> The tuples and thresholds for the pg_statistic table look like they
> might be the reason it doesn't get vacuumed. The question is, why does
> the reltuples on the stats table keep climbing until you do a vacuum
> full, at which point they return to (near) zero? No matter what
> threshold and scale I set, I am not going to get that table vacuumed
> because it's delete/update rate are 0 (it's always inserted to I believe).

Quite incorrect --- if that were so, there would be no dead tuples for vacuum to remove, hm? Actually ANALYZE does an update-or-insert-if-not-present fandango.

I'm sort of mystified by your report. I can think of several ways that the system might fail to notice that pg_statistic needs vacuuming, but none of them seem to actually occur in the current code, and I don't see any relevant difference in the CVS logs (I admit to being too lazy to build an exact 8.0.3 version to test --- but 8.0.7 seems fine).

Can you keep an eye on pg_statistic's entry in the pg_stat_all_tables view for each affected database, and confirm whether the n_tup_upd/n_tup_del counts are rising over time or not? That would at least narrow down the problem a bit.

regards, tom lane
Tom,

Tom Lane wrote:
> Quite incorrect --- if that were so, there would be no dead tuples
> for vacuum to remove, hm? Actually ANALYZE does an
> update-or-insert-if-not-present fandango.

Thanks for the correction - I was basing my comments on the fact that the tuple count was just climbing for the pg_statistic table, but I did wonder why the system would do that deliberately - answer is, it doesn't!

> I'm sort of mystified by your report. I can think of several ways that
> the system might fail to notice that pg_statistic needs vacuuming, but
> none of them seem to actually occur in the current code, and I don't see
> any relevant difference in the CVS logs (I admit to being too lazy to
> build an exact 8.0.3 version to test --- but 8.0.7 seems fine).
> Can you keep an eye on pg_statistic's entry in the pg_stat_all_tables
> view for each affected database, and confirm whether the n_tup_upd/
> n_tup_del counts are rising over time or not? That would at least
> narrow down the problem a bit.

As a baseline I see:

imes=# select * from pg_stat_all_tables where relname = 'pg_statistic';
 relid | schemaname |   relname    | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del
-------+------------+--------------+----------+--------------+----------+---------------+-----------+-----------+-----------
 16408 | pg_catalog | pg_statistic |        0 |            0 |  4421694 |       4273043 |         0 |   2120401 |         0
(1 row)

Now I do:

imes=# analyze;
ANALYZE
imes=# select * from pg_stat_all_tables where relname = 'pg_statistic';
 relid | schemaname |   relname    | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del
-------+------------+--------------+----------+--------------+----------+---------------+-----------+-----------+-----------
 16408 | pg_catalog | pg_statistic |        0 |            0 |  4423073 |       4274484 |         0 |   2121631 |         0
(1 row)

From that I see that the pg_statistics table is never inserted/deleted, only updated (ins/del counts are 0), and in principle simply analyzing doesn't increase the tuple count. Yet the reltuples in pg_class grows, as does the disk space used for the pg_statistic table. What makes it grow? Is it running queries while analyze is running, causing new row versions to be created and then not getting cleaned up?

I wonder if Matt's suggestion on the vacuum scale is not in fact the real issue here. Because my scale is set to 2 (the default), the number of inserts is always zero (unless we create new tables/indexes, I think the number of active stats rows stays the same, no?), and the number of updates is at most equal to the number of tuples, we can never get enough changes to trigger a vacuum (which would require 2x the number of updates). So any removable rows won't ever get removed.

In fact I just noticed that the number of stats tuples just climbed from 1236 to 2634. The ins/del counts are still zero. I ran analyze and the update counter went up only by 1232.

For pg_autovacuum to vacuum this table I need (2*2634)+1000 = 6268 updates, which is never going to happen. So I'm going to run with a fractional value of scale for a while and see if it makes things any better.

Thanks for all the help,
Robin
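Robin's arithmetic can be sketched numerically. This is a toy illustration of the threshold formula from the pg_autovacuum README, not the tool's actual source; the starting numbers (1236 live rows, ~1232 updates per ANALYZE) are the ones from his report:

```python
def vacuum_threshold(base, scale, reltuples):
    # VacuumThreshold = vacuum_base_value + vacuum_scaling_factor * reltuples
    return base + scale * reltuples

def analyzes_until_vacuum(base, scale, reltuples=1236, per_analyze=1232, max_runs=1000):
    """Simulate repeated ANALYZE runs: each run updates ~per_analyze rows,
    and until a vacuum happens each update leaves a dead tuple behind,
    so reltuples climbs in lockstep with n_tup_upd."""
    n_tup_upd = 0
    for run in range(1, max_runs + 1):
        n_tup_upd += per_analyze
        reltuples += per_analyze
        if n_tup_upd > vacuum_threshold(base, scale, reltuples):
            return run
    return None  # threshold never crossed

print(analyzes_until_vacuum(1000, 2.0))  # None: with scale 2 the threshold always outruns the updates
print(analyzes_until_vacuum(1000, 0.3))  # 2: a fractional scale is crossed after a couple of runs
```

The simulation confirms the intuition: as long as the dead tuples inflate reltuples one-for-one with n_tup_upd, any scaling factor of 1 or more makes the threshold unreachable.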
On Thu, Feb 23, 2006 at 11:34:22PM +0000, Robin Iddon wrote:
> From that I see that the pg_statistics table is never inserted/deleted
> only updated (ins/del counts are 0), and in principle simply analyzing
> doesn't increase the tuple count.

Sure it does. An update in PostgreSQL is essentially identical to a DELETE followed by an INSERT, due to MVCC.

> I wonder if Matt's suggestion on the vacuum scale is not in fact the
> real issue here. Because my scale is set to 2 (the default) but the
> number of inserts is always zero (unless we create new tables/indexes, I
> think the number of active stats stays the same, no?) and the number of
> updates is at most equal to the number of tuples, we can never get
> enough changes to trigger a vacuum (which would require 2x the number of
> updates). Now any removable rows won't ever get removed.

Sure it will, unless pg_autovacuum can't stay running long enough.

> In fact I just noticed that the number of stats tuples just climbed from
> 1236 to 2634. The ins/del counts are still zero. I ran analyze and the
> update counter went up only by 1232. For pg_autovacuum to vacuum this
> table I need (2*2634)+1000 = 6268 updates, which is never going to happen.

Uh, just run analyze 6 times; that should be enough to cause autovacuum to kick in.

> So I'm going to run with a fractional value of scale for a while and see
> if it makes things any better.

Definitely a good idea in any case. You might also consider going to 8.1.3; you'd get built-in autovac, which means one less thing to manage, plus its default settings are a lot saner (though still too high IMHO).

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
> In fact I just noticed that the number of stats tuples just climbed
> from 1236 to 2634. The ins/del counts are still zero. I ran analyze
> and the update counter went up only by 1232. For pg_autovacuum to
> vacuum this table I need (2*2634)+1000 = 6268 updates, which is never
> going to happen.

At least not before the number of tuples has climbed beyond that count, based on my experience to date!

Robin
On Thu, Feb 23, 2006 at 11:44:47PM +0000, Robin Iddon wrote:
> >In fact I just noticed that the number of stats tuples just climbed
> >from 1236 to 2634. The ins/del counts are still zero. I ran analyze
> >and the update counter went up only by 1232. For pg_autovacuum to
> >vacuum this table I need (2*2634)+1000 = 6268 updates, which is never
> >going to happen.
>
> At least not before the number of tuples has climbed to beyond that
> count based on my experience to date!

I think you're confusing tuples and rows. Every time you update a row, you create a new tuple. As you've seen, every time you run analyze, ~1400 rows are being updated, which means 1400 new tuples. So as long as pg_autovacuum stays running long enough, it will notice that pg_statistics needs to be vacuumed.

Note that it's critical that pg_autovac stay running; it keeps the counts of how many tuples every table has in memory, so if it gets shut down before enough updates accumulate, the count restarts from scratch. I'm not certain if the built-in vacuum in 8.1 solves that problem or not.

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
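The tuples-versus-rows distinction Jim is making can be modelled with a toy MVCC sketch (an illustration of the concept only, not how PostgreSQL stores tuples): an UPDATE leaves the old tuple behind as dead, so on-disk tuples grow even while the visible row count stays flat, until a VACUUM reclaims them.

```python
class Table:
    """Toy MVCC model: updates append new tuples and leave dead ones behind."""
    def __init__(self, live):
        self.live = live      # visible rows
        self.dead = 0         # dead tuples awaiting vacuum

    def analyze(self):
        # ANALYZE rewrites every stats row: each update = old tuple dies,
        # new tuple appended, visible row count unchanged.
        self.dead += self.live

    def vacuum(self):
        reclaimed, self.dead = self.dead, 0
        return reclaimed

    @property
    def tuples(self):         # what reltuples/disk usage tracks
        return self.live + self.dead

t = Table(live=1236)
for _ in range(3):
    t.analyze()
print(t.live, t.tuples)  # 1236 4944: rows flat, tuple count ballooned
print(t.vacuum())        # 3708 dead tuples reclaimed in one vacuum
```

This is why the ins/del counters Robin watched stayed at zero while the table grew: only n_tup_upd moves, but each update still produces a dead tuple that only vacuum can reclaim.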
Jim C. Nasby wrote:
> Please upgrade.
>
> Are you running with the default FSM settings? I'm guessing that
> pg_statistics has just gotten so large that it's blowing out the FSM.
> The last few lines from vacuumdb -av would verify that...

Jim,

I am running this in a critical production environment, so I would like to understand whether upgrading is my only option before I do it (I'll need to do some testing first, then organise downtime, etc. - I will do it if required, but would rather have a workaround!). If I do upgrade, I assume you mean upgrade to the current 8.1.x release?

Yes, I am running with the default FSM settings.

Here is the output from vacuumdb -v gems (I ran it against my tiny test database, not the production database, but remember that even this tiny test database had an enormous pg_statistic table):

<snip> this section relates to vacuuming pg_statistic
INFO:  vacuuming "pg_catalog.pg_statistic"
INFO:  index "pg_statistic_relid_att_index" now contains 293 row versions in 15421 pages
DETAIL:  2265351 index row versions were removed.
15072 index pages have been deleted, 7480 are currently reusable.
CPU 3.25s/1.54u sec elapsed 304.19 sec.
<snip>

<snip> this is the last few lines as requested
INFO:  free space map: 182 relations, 25120 pages stored; 98272 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
<snip>

Here is the output from vacuumdb -av on my standby server, which has a recent copy of the main production data (via pg_dump/pg_restore) but has had very few (if any) transactions executed on it since the copy was made.

<snip> this section relates to the copy of the production database on the backup server
INFO:  vacuuming "pg_catalog.pg_statistic"
INFO:  index "pg_statistic_relid_att_index" now contains 3821225 row versions in 35676 pages
DETAIL:  2795920 index row versions were removed.
19879 index pages have been deleted, 11950 are currently reusable.
CPU 6.58s/7.35u sec elapsed 321.85 sec.
INFO:  "pg_statistic": removed 2795920 row versions in 81619 pages
DETAIL:  CPU 17.89s/4.45u sec elapsed 191.72 sec.
INFO:  index "pg_statistic_relid_att_index" now contains 1025283 row versions in 35676 pages
DETAIL:  2795942 index row versions were removed.
28947 index pages have been deleted, 11950 are currently reusable.
CPU 6.15s/6.77u sec elapsed 120.23 sec.
INFO:  "pg_statistic": removed 2795942 row versions in 82363 pages
DETAIL:  CPU 18.23s/4.48u sec elapsed 161.45 sec.
INFO:  index "pg_statistic_relid_att_index" now contains 1554 row versions in 35676 pages
DETAIL:  1023729 index row versions were removed.
34045 index pages have been deleted, 11950 are currently reusable.
CPU 3.01s/2.31u sec elapsed 46.31 sec.
INFO:  "pg_statistic": removed 1023729 row versions in 30288 pages
DETAIL:  CPU 6.96s/1.50u sec elapsed 69.42 sec.
INFO:  "pg_statistic": found 6615591 removable, 1554 nonremovable row versions in 395203 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 7545029 unused item pointers.
0 pages are entirely empty.
CPU 64.16s/28.04u sec elapsed 1017.92 sec.
<snip>

<snip> all of the FSM sections from the whole cluster during the vacuumdb -av run
INFO:  free space map: 6 relations, 19936 pages stored; 21632 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
INFO:  free space map: 6 relations, 19936 pages stored; 21632 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
INFO:  free space map: 6 relations, 19939 pages stored; 21648 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
INFO:  free space map: 6 relations, 19939 pages stored; 21648 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
INFO:  free space map: 6 relations, 19975 pages stored; 29600 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 182 kB shared memory.
<snip>

I don't actually understand exactly what this is telling me, though!

Thanks,
Robin
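The FSM arithmetic buried in that output can be made explicit. A quick back-of-the-envelope sketch (using the numbers reported above and the 8.0 default max_fsm_pages of 20000; the 8 KB figure is the standard PostgreSQL page size):

```python
# Free Space Map capacity vs. demand, from the vacuumdb -v output above.
max_fsm_pages = 20000   # 8.0 default allocation ("20000 pages" in the DETAIL line)
pages_needed  = 98272   # "98272 total pages needed" reported by vacuum
page_size_kb  = 8       # PostgreSQL's standard page size

# Pages whose free space vacuum finds but the FSM cannot remember:
untracked = max(0, pages_needed - max_fsm_pages)
print(untracked)                            # 78272 pages fall off the map
print(untracked * page_size_kb // 1024)     # ~611 MB that can never be reused in place
```

When the FSM overflows like this, a plain VACUUM still marks the space free, but nothing ever reuses it, so the table keeps extending at the end; only VACUUM FULL (or a much larger max_fsm_pages) gets the space back.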
On Thu, Feb 23, 2006 at 11:49:46PM +0000, Robin Iddon wrote:
> I am running this in a critical production environment, so I would like
> to understand whether upgrading is my only option before I do it (I'll
> need to do some testing first, then organise downtime, etc. - I will do
> it if required, but would rather a workaround!). If I do upgrade, I
> assume you mean upgrade to the current 8.1.x release?

Well, as I mentioned elsewhere, there are some reasons to upgrade to 8.1.x. But no, my comment was specifically about upgrading to the latest 8.0 release. There have been both security and data-loss fixes that you really should get.

> Yes, I am running with the default FSM settings.
>
> Here is the output from vacuumdb -v gems (I ran it against my tiny test
> database, not the production database right now, but remember that even
> this tiny test database had an enormous pg_statistic table):
>
> <snip> this is the last few lines as requested
> INFO: free space map: 182 relations, 25120 pages stored; 98272 total
> pages needed
> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 182 kB
> shared memory.
> <snip>

You need 100k pages minimum here.

> I don't actually understand exactly what this is telling me though!

Take a look at
http://www.pervasivepostgres.com/lp/newsletters/2005/Insights_opensource_Nov.asp#3
It's a good overview of the FSM, how it works, and how to understand vacuum verbose output. Though, I guess my opinion is biased since I wrote it... ;)

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
> You need 100k pages minimum here.
>
>> I don't actually understand exactly what this is telling me though!
>
> Take a look at
> http://www.pervasivepostgres.com/lp/newsletters/2005/Insights_opensource_Nov.asp#3
> It's a good overview of the FSM, how it works, and how to understand
> vacuum verbose output. Though, I guess my opinion is biased since I
> wrote it... ;)

OK, I read that and understand now what is happening - thanks!

(1) I think the FSM was a possible cause of the problem - the pg_statistic table had nowhere to store the space released by vacuuming. I have fixed that by increasing the FSM space by 10x for now, and will monitor vacuumdb -av output periodically to make sure the database stays within the bounds of the FSM.

(2) I still think that the 8.0 default vacuum base threshold and scale mean that the pg_statistic table might never trigger the vacuum threshold, and I would like to check that my understanding is correct:

Analyzing my tables causes some or all of the rows in the pg_statistic table to be updated. Each row update in Postgres causes a new tuple to be created to represent the new row version. So each updated statistic makes pg_stat_all_tables.n_tup_upd for pg_statistic climb by 1. It also makes pg_class.reltuples for pg_statistic climb by 1 (new row version).

From README.pg_autovacuum (and I think 22.1.4 of the 8.1.x document says the same thing):

- If the number of (deletes + updates) > VacuumThreshold, then a
  vacuum analyze is performed.

  VacuumThreshold is equal to:
  vacuum_base_value + (vacuum_scaling_factor * "number of tuples in the table")

In general deletes is 0 for the pg_statistic table so for the default values we're looking at

    n_tup_upd >= 1000 + (2.0 * reltuples)

to trigger a vacuum. We know for each increment of n_tup_upd, reltuples will receive the same increment. Thus if the above expression isn't true for any non-zero starting values of n_tup_upd and reltuples, it won't ever be true:

    If:   n_tup_upd     < 1000 + (2.0 * reltuples)
    Then: n_tup_upd + X < 1000 + (2.0 * (reltuples + X))

I see that in 8.1.x this has been resolved by defaulting the scale to 0.4. Rightly or wrongly, I have set my scale to 0.3.

Thanks for all the help you've offered so far.

Robin
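The induction step can also be seen by looking at the *gap* between the threshold and the update count: each batch of X updates moves the gap by (scale - 1) * X, so a scale of 1 or more can only widen it, while a fractional scale shrinks it toward zero. A small numeric check of that identity (illustrative only, using the figures from earlier in the thread):

```python
def gap_change(n_tup_upd, reltuples, base, scale, X):
    """Return (gap before, gap after) when X updates also add X tuples.
    gap = VacuumThreshold - n_tup_upd; it changes by (scale - 1) * X."""
    before = (base + scale * reltuples) - n_tup_upd
    after = (base + scale * (reltuples + X)) - (n_tup_upd + X)
    return before, after

b, a = gap_change(0, 1236, 1000, 2.0, 1232)
print(a - b)   # +1232.0: with scale 2 every ANALYZE pushes the goal further away

b, a = gap_change(0, 1236, 1000, 0.3, 1232)
print(a - b)   # about -862.4: with scale 0.3 each ANALYZE closes the gap
```

This is exactly why the 8.1 defaults moved to a fractional scaling factor: only scale < 1 lets an update-in-place table like pg_statistic ever reach its vacuum threshold.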
On Sat, Feb 25, 2006 at 08:15:15AM +0000, Robin Iddon wrote:
> From README.pg_autovacuum (and I think 22.1.4 of the 8.1.x document
> says the same thing):
>
> - If the number of (deletes + updates) > VacuumThreshold, then a
>   vacuum analyze is performed.
>
>   VacuumThreshold is equal to:
>   vacuum_base_value + (vacuum_scaling_factor * "number of tuples in
>   the table")

By "number of tuples in the table", I'm pretty sure they mean visible tuples; i.e., the row count. In any case, I've seen pg_autovacuum work with my own eyes, so as long as it stays running long enough there's no reason it shouldn't end up vacuuming pg_statistics.

> In general deletes is 0 for the pg_statistic table so for the default
> values we're looking at
>
>     n_tup_upd >= 1000 + (2.0 * reltuples)
>
> to trigger a vacuum.
>
> We know for each increment of n_tup_upd, reltuples will receive the same
> increment. Thus if the above expression isn't true for any non-zero
> starting values of n_tup_upd and reltuples, it won't ever be true:
>
>     If:   n_tup_upd     < 1000 + (2.0 * reltuples)
>     Then: n_tup_upd + X < 1000 + (2.0 * (reltuples + X))
>
> I see that in 8.1.x this has been resolved by defaulting the scale to
> 0.4. Rightly or wrongly I have set my scale to 0.3.

You should probably drop the threshold to something closer to 200, too.

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461