Following up.
What really troubles me is that VACUUM FULL doesn't actually fix the problem. If there had been bad data that was later corrected via mass updates, I'd expect a VACUUM FULL to clear the bloat.
When I run VACUUM FULL twice back to back, this is what I get:
db_name=# VACUUM FULL VERBOSE table_schema.table_name;
INFO: vacuuming "table_schema.table_name"
INFO: "table_name": found 2 removable, 29663 nonremovable row versions in 1754 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 0.07s/0.10u sec elapsed 0.30 sec.
VACUUM
db_name=# VACUUM FULL VERBOSE table_schema.table_name;
INFO: vacuuming "table_schema.table_name"
INFO: "table_name": found 0 removable, 29663 nonremovable row versions in 1754 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 0.09s/0.09u sec elapsed 0.32 sec.
VACUUM
I think the question to address may be: "Why does the check_postgres query think the table should occupy only 334 pages instead of 1754?"
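For what it's worth, the check_postgres bloat check works from statistics rather than the actual heap: roughly, reltuples multiplied by an estimated row size (per-tuple overhead plus the avg_width values in pg_stats), divided by the block size. A rough sketch of that comparison, assuming an 8 kB block size and a ~24-byte per-tuple overhead (both assumptions, and the schema/table names are just the ones from the example above):

    SELECT c.relpages AS actual_pages,
           ceil(c.reltuples * (24 + (SELECT sum(avg_width)
                                     FROM pg_stats
                                     WHERE schemaname = 'table_schema'
                                       AND tablename  = 'table_name'))
                / 8192.0) AS estimated_pages
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'table_schema'
      AND c.relname = 'table_name';

If estimated_pages comes out near 334 here too, the discrepancy is in the statistics (stale or misleading avg_width, e.g. from TOASTed or wide columns) rather than in the heap itself, which would be consistent with VACUUM FULL finding nothing to remove.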