Hi. I've been patient. PG is 9.0.17, updated via Yum yesterday.
One of my large tables (101 GB on disk, about 1.1 billion rows) was taking too long to vacuum. I'm not sure whether it's an index corruption issue, but I tried VACUUM FULL ANALYZE as recommended in another thread yesterday; it ran for 5 hours on each of the two attempts without finishing.
Now the REINDEX TABLE has been running for over 6 hours, as I decided to be patient and just let something finish. Not sure this is normal though! How do production-level DBAs handle this if it takes so long?
If I open another SSH window to my server and try "select * from pg_stat_activity", it just hangs there; I presume the REINDEX is taking up all the memory? I basically can't do anything else on this server.
In my experience, a REINDEX on a 100 GB table on hardware like that will definitely take hours.
It may actually be CPU-bound rather than I/O-bound if you have a large functional index on the table (such as lower(text_column)),
and since PostgreSQL can only use a single core for the operation, you are out of luck there.
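You can check whether the table actually carries an expression index by looking at the pg_indexes view (the table name below is just a placeholder):

```sql
-- List every index on the table, with its full definition;
-- expression indexes show up as e.g. "... (lower(text_column))".
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'mytable';
```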
To speed up the process without locking your data, you might consider building a new index with CREATE INDEX CONCURRENTLY
and then dropping the old one (make sure your DROP won't sit waiting to acquire a lock when you run it, otherwise
other processes will start to queue behind it).
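As a sketch of that swap (index and table names here are hypothetical), note that on 9.0 there is no lock_timeout, so statement_timeout is the guard that makes the DROP fail fast instead of queueing:

```sql
-- Build the replacement index without an exclusive lock.
-- Slower than a plain CREATE INDEX, but reads and writes continue.
-- Must NOT run inside a transaction block.
CREATE INDEX CONCURRENTLY mytable_lower_col_new
    ON mytable (lower(text_column));

-- Drop the old index, bailing out quickly if the lock isn't available
-- so other sessions don't pile up behind a waiting DROP.
BEGIN;
SET LOCAL statement_timeout = '2s';
DROP INDEX mytable_lower_col_old;
COMMIT;
```

One caveat: if CREATE INDEX CONCURRENTLY fails partway, it leaves behind an invalid index that you have to drop manually before retrying.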
I'd question the usefulness of running VACUUM FULL on a production server at all (there are other ways around it, e.g.