Thread: autovacuum hung on simple tables

autovacuum hung on simple tables

From:
senor
Date:
Hi All,

I'm still trying to get a better understanding of the autovacuum process.
This is a different postgres installation than the one in my previous posts, and it is confusing me in new ways.
Still 11.4 running on CentOS 7 and 8 nvme in software raid

This issue started with postgres "...not accepting commands to avoid wraparound...".
On this server I was able to stop all access to the DB and dedicate resources to postgres alone. I thought I could allow autovacuum to do its thing with a ton of workers.

I think everything boils down to 2 questions:
1. Can autovacuum or manual vacuum be coerced into dealing with the oldest tables first?
    1a. Where might I find advice on configuring postgres resources for maximum CPU & memory use during maintenance? In other words, the quickest path out of "not accepting commands" land, besides increasing autovacuum_freeze_max_age.
2. What can cause autovacuum to stall? Could an associated toast table or index be the cause?

It appeared that autovacuum was not choosing the tables with the oldest xmin, so I produced an ordered list of the oldest tables with:
SELECT oid::regclass, age(relfrozenxid)
FROM pg_class
WHERE relkind IN ('r', 't', 'm')
AND age(relfrozenxid) > 2000000000
ORDER BY 2 DESC

The list contained over 6000 tables from pg_toast. They all belonged to daily reports tables. The reports are created daily and not touched again.

Most of the autovacuums that did start seem to be hung, never completing even on the simplest tables.
The newest 2 autovacuums in the list are completing, roughly one every couple of seconds.
CPU and disk IO are nearly idle.
An example table is shown here:

phantom=# select
phantom-#       pg_size_pretty(pg_total_relation_size(relid)) as total_size,
phantom-#       pg_size_pretty(pg_relation_size(relid, 'main')) as relation_size_main,
phantom-#       pg_size_pretty(pg_relation_size(relid, 'fsm')) as relation_size_fsm,
phantom-#       pg_size_pretty(pg_relation_size(relid, 'vm')) as relation_size_vm,
phantom-#       pg_size_pretty(pg_relation_size(relid, 'init')) as relation_size_init,
phantom-#       pg_size_pretty(pg_table_size(relid)) as table_size,
phantom-#       pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) as external_size
phantom-#  from
phantom-#       pg_catalog.pg_statio_user_tables
phantom-# where
phantom-#   relname like 'report_user_439';
 total_size | relation_size_main | relation_size_fsm | relation_size_vm | relation_size_init | table_size | external_size
------------+--------------------+-------------------+------------------+--------------------+------------+---------------
 80 kB      | 8192 bytes         | 24 kB             | 8192 bytes       | 0 bytes            | 48 kB      | 72 kB
(1 row)


I scripted a vacuum loop using the oldest table list. It's extremely slow, but it was making better progress than autovacuum was.

Using ps I see that there were as many worker processes as defined with autovacuum_max_workers, but pg_stat_activity consistently showed 19. I killed the script thinking there might be a conflict. I saw no difference after 30 minutes, so I restarted the script. I never saw anything in pg_stat_progress_vacuum.

vacuum settings:
                name                 |  setting
-------------------------------------+-----------
 autovacuum                          | on
 autovacuum_analyze_scale_factor     | 0.1
 autovacuum_analyze_threshold        | 50
 autovacuum_freeze_max_age           | 200000000
 autovacuum_max_workers              | 40
 autovacuum_multixact_freeze_max_age | 400000000
 autovacuum_naptime                  | 4
 autovacuum_vacuum_cost_delay        | 0
 autovacuum_vacuum_cost_limit        | 5000
 autovacuum_vacuum_scale_factor      | 0.2
 autovacuum_vacuum_threshold         | 50
 autovacuum_work_mem                 | -1
 log_autovacuum_min_duration         | 0
 vacuum_cleanup_index_scale_factor   | 0.1
 vacuum_cost_delay                   | 0
 vacuum_cost_limit                   | 200
 vacuum_cost_page_dirty              | 20
 vacuum_cost_page_hit                | 1
 vacuum_cost_page_miss               | 10
 vacuum_defer_cleanup_age            | 0
 vacuum_freeze_min_age               | 50000000
 vacuum_freeze_table_age             | 150000000
 vacuum_multixact_freeze_min_age     | 5000000
 vacuum_multixact_freeze_table_age   | 150000000

I'm now thinking that autovacuum getting hung up is what caused the issue to begin with. I see nothing but the successful vacuums from the script and my own fat-fingered commands in the postgres logs (set at info).

Any hints are appreciated.
Senor


Re: autovacuum hung on simple tables

From:
Laurenz Albe
Date:
On Fri, 2022-11-04 at 02:50 +0000, senor wrote:
> I'm still trying to get a better understanding of the autovacuum process. 
> This is a different postgres installation than the one in my previous posts, and it is confusing me in new ways.
> Still 11.4 running on CentOS 7 and 8 nvme in software raid

Not good.  That should be 11.17.  You are missing over two years of important bug fixes,
which could be part of the problem.

> This issue started with postgres "...not accepting commands to avoid wraparound...".

That does not happen normally.  Something must have prevented autovacuum from succeeding:

- data corruption that makes VACUUM fail
- a prepared transaction or long running transaction that prevents PostgreSQL from
  cleaning up
- a long running session with a temporary table
- autovacuum is too slow to cope with the rate at which dead tuples are generated

> On this server I was able to stop all access to DB and dedicate resources to only postgres.
> I thought I could allow autovacuum to do its thing with a ton of workers.

If autovacuum is too slow, this will take a long time.  Note that in v11, the default setting
for "autovacuum_vacuum_cost_delay" is still 20ms, which makes autovacuum pretty slow.
Later, you show that you have "autovacuum_vacuum_cost_delay" set to 0, which is good.
Did you change that recently, so that some workers started before that change?

You should also crank up "maintenance_work_mem" for autovacuum to be fast.
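With "autovacuum_work_mem" at -1, autovacuum workers fall back to "maintenance_work_mem", so raising that one is enough. For example (the 1GB value is only an illustration; size it to your RAM and the number of workers):

ALTER SYSTEM SET maintenance_work_mem = '1GB';
SELECT pg_reload_conf();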

> I think everything boils down to 2 questions:
> 1. Can autovacuum or manual vacuum be coerced into dealing with oldest first?

Manual VACUUM is the easiest.  You can start it on the table with the oldest
"relfrozenxid" first.

>     1a. Where might I find advice on configuring postgres resources for maximum cpu &
>         memory maintenance use. In other words quickest path out of "not accepting commands"
>         land. Besides increasing autovacuum_freeze_max_age.

The documentation has a lot about that.
The quickest way is to run VACUUM on the table with the oldest "relfrozenxid" manually.
I would run VACUUM (VERBOSE), so that you get information if it can clean up or
freeze anything.

> 2. What can cause autovacuum to stall? Could an associated toast table or index be the cause?

Potentially data corruption could send a backend into an endless loop.

> It appeared that autovacuum was not choosing the tables with the oldest xmin so I
> produced an ordered list of oldest tables with:
> SELECT oid::regclass, age(relfrozenxid)
> FROM pg_class
> WHERE relkind IN ('r', 't', 'm')
> AND age(relfrozenxid) > 2000000000
> ORDER BY 2 DESC
> 
> The list contained over 6000 tables from pg_toast. They all belonged to daily
> reports tables. The reports are created daily and not touched again.
> 
> Most of the autovacuums that did start seem to be hung. Never completing even on the simplest tables. 
> The newest 2 autovacuums in the list are completing about one every couple seconds.
> CPU and disk IO are nearly idle.

To see if they are hung, look at "wait_event", "wait_event_type" and "state" in the
"pg_stat_activity" rows for the autovacuum workers.

Strong locks on a table have the potential to block an anti-wraparound autovacuum.
Again, check for long-running and prepared transactions.
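For example (the one-hour cutoff is arbitrary):

SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;

SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour'
ORDER BY xact_start;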


> I scripted a vacuum loop using the oldest table list. It's extremely slow but it was
> making better progress than autovacuum was.
> 
> Using ps I see that there were as many worker processes as defined with autovacuum_max_workers
> but pg_stat_activity consistently showed 19. I killed the script thinking there might be a conflict.
> I saw no difference after 30 minutes so restarted script.

I am not sure what exactly you are doing here, but you should know that there
can only be one VACUUM process per table.  If there is already an anti-wraparound autovacuum
running on the table, a manual VACUUM will simply be blocked until the autovacuum worker
is done.

> Never saw anything in pg_stat_progress_vacuum.

Now that would be weird, except if VACUUM cannot get the required lock on the table.

> vacuum settings:
>                 name                 |  setting  
> -------------------------------------+-----------
>  autovacuum                          | on        
>  autovacuum_freeze_max_age           | 200000000 
>  autovacuum_max_workers              | 40        
>  autovacuum_naptime                  | 4         
>  autovacuum_vacuum_cost_delay        | 0         
>  autovacuum_vacuum_cost_limit        | 5000      
>  autovacuum_work_mem                 | -1        
>  vacuum_freeze_min_age               | 50000000  
>  vacuum_freeze_table_age             | 150000000 
> 
> I'm now thinking that autovacuum getting hung up is what caused the issue to begin with. I see nothing
> but the successful vacuums from the script and my own fat-fingering commands in the postgres
> logs (set at info).

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: autovacuum hung on simple tables

From:
senor
Date:
I'm a little late getting back to this but still have no solution.
I 100% agree that updating postgres is best. The usual excuses for not updating apply. It will be done when it is allowed to be done. It remains 11.4 for now. I read through the changelogs up through 11.18 and didn't see anything obviously related. The underlying cause could be pretty involved and something I wouldn't recognize.

Thank you Laurenz Albe for reminding me about an important clue. I had inadvertently executed a vacuum freeze on a table that pg_stat_activity indicated was currently being autovacuumed. The manual vacuum succeeded while leaving the previous autovacuum still showing active in pg_stat_activity. Manual vacuum always completes and never stalls, but it also often does not appear in pg_stat_progress_vacuum unless it's a longer process.

It appears the process completes the vacuum but does not register that fact. relfrozenxid of the main table is what would be expected, but an associated toast table was still very old. Cancelling all pending vacuums of the table and manually running vacuum completes in a few seconds, and both the main relation and toast are updated as expected, with last vacuum time updated in pg_stat_all_tables. pg_stat_progress_vacuum never had any entry. Autovacuum and autovacuum analyze both get hung.
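
The main and toast ages can be compared with a query along these lines (report_user_439 is just the example table from earlier):

SELECT c.oid::regclass AS main_table,
       t.oid::regclass AS toast_table,
       age(c.relfrozenxid) AS main_age,
       age(t.relfrozenxid) AS toast_age
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'report_user_439';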

I often see the same table listed multiple times in pg_stat_activity with different pids and:
state: active
backend_type: autovacuum worker
wait_event_type: blank
state_change: 10-15 seconds after backend_start - about how long it takes to manually run vacuum on most tables.

What exactly does autovacuum rely on in the stats file? I ran strace on a hung autovacuum process and saw a repeated read of /run/postgresql/db_16384.stat (tmpfs). The file is 740MB, which is about the same as on other similar installations I've reviewed. I'm lacking in overall experience in this, though.

One final oddity:
I ran a query for the oldest relfrozenxid and redirected the output to a file. The query took around a minute. A few seconds after it finished, I queried for 'active' in pg_stat_activity and the oldest-relfrozenxid query was still listed. A few seconds later it had cleared.

Can a corrupted stats file prevent autovac from reading/writing?


Sorry about the reply formatting. I tried using outlook web in Edge. Maybe that was a mistake.
Thanks,
Senor


Re: autovacuum hung on simple tables

From:
Laurenz Albe
Date:
On Mon, 2022-11-28 at 04:05 +0000, senor wrote:
> I'm a little late getting back to this but still have no solution.
> 
> I had inadvertently executed a vacuum freeze on a table that pg_stat_activity
> indicated was currently being autovacuumed. The manual vacuum succeeded while
> leaving the previous autovacuum still showing active in pg_stat_activity.

No, that cannot be.  VACUUM processes are mutually exclusive.
It could have been another autovacuum worker (different "pid").

> Manual vacuum always completes, never stalls, but also often does not appear
> in pg_stat_progress_vacuum unless it's a longer process.

It *does* appear in "pg_stat_progress_vacuum", but perhaps you are too slow
to catch it.
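Polling it from another session while the VACUUM runs, for example with

SELECT pid, datname, relid::regclass, phase,
       heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_vacuum;

should show the phase advancing; for short runs the row may only be visible for a moment.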

> It appears the process completes the vacuum but does not register that fact.
> relfrozenxid of the main table is what would be expected but an associated
> toast table was still very old. Cancelling all pending vacuums of the table
> and manually running vacuum completes in a few seconds and both the main
> relation and toast are updated as expected with last vacuum time updated
> in pg_stat_all_tables.

Obviously autovacuum decided to process the main table, but not the TOAST
table.  That is normal.  Manual VACUUM processes both, unless you specify
the option PROCESS_TOAST OFF.

> Autovacuum and autovacuum analyze both get hung.

No, they don't; not unless they are anti-wraparound autovacuum runs and you
are holding strong locks on the table in a long-running transaction.
They may be slow to complete, because in v11 autovacuum by default is very
slow indeed, as "autovacuum_vacuum_cost_delay" is 20ms.

> I often see the same table listed multiple times in pg_stat_activity with
> different pids and:
> state: active
> backend_type: autovacuum worker
> wait_event_type: blank
> state_change: 10-15 seconds after backend_start - about how long it takes
> to manually run vacuum on most tables.

That can be parallel workers that are used to scan indexes.

> What exactly does autovacuum rely on in the stats file? I ran strace on a
> hung autovacuum process and saw a repeated read of
> /run/postgresql/db_16384.stat (tempfs). The file is 740MB which is about
> the same as other similar installations I've reviewed. I'm lacking in overall
> experience in this though.

I don't know either, and I am too lazy to read the code on that, but I suspect
that it has no connection to your problem.

> One final oddity:
> I ran a query for oldest relfrozenxid and redirected to file. The query took
> around a minute. A few seconds after it finished, I queried for 'active' in
> pg_stat_activity and the oldest relfrozenxid query was still listed.
> A few seconds later it had cleared.

That sounds weird and is hard to believe.
Are the disk or the CPU under extreme stress?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com