Appreciate it, guys. I understand that being large isn't itself a problem, but relative to its history and the lack of real changes, it's strange, and I'd like to better understand what is going on...
I tracked it down to a specific table, and then doing a VACUUM FULL ANALYZE on that table yields: "108765 dead row versions cannot be removed yet."
Which strikes me as odd. Any reading I can do to better understand why so many (relative to the overall table size) dead rows cannot be removed?
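Dead rows that "cannot be removed yet" are generally being retained because something still needs the old row versions: a long-running (or idle-in-transaction) session, a prepared transaction that was never resolved, or a stale replication slot. A sketch of queries to check each of those, using the standard system views:

```sql
-- Sessions whose snapshots may be holding back vacuum,
-- oldest transaction horizon first:
SELECT pid, state, xact_start, backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;

-- Prepared transactions that were never committed or rolled back:
SELECT gid, prepared
FROM pg_prepared_xacts;

-- Replication slots retaining an old transaction horizon:
SELECT slot_name, active, xmin, catalog_xmin
FROM pg_replication_slots;
```

If any of these turn up something old, ending that session (or dropping the slot, or resolving the prepared transaction) should let the next vacuum reclaim those rows.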
Wells Oliver <wells.oliver@gmail.com> writes:
> Yeah, trying to figure out what actual table is clearly in need of a vacuum
> b/c of the size of that toast table.
Something like
select relname from pg_class where reltoastrelid = 'pg_toast.pg_toast_NNN'::regclass;
(or, if you have potential duplicate relnames, select oid::regclass ...)
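Along the same lines, a sketch that lists the largest toast tables together with their owning tables, so the bloat can be attributed at a glance (the LIMIT is arbitrary):

```sql
SELECT c.oid::regclass AS owning_table,
       t.oid::regclass AS toast_table,
       pg_size_pretty(pg_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
ORDER BY pg_relation_size(t.oid) DESC
LIMIT 10;
```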
The mere fact that it's big does not indicate a problem, though.