Brad Nicholson <bnichols@ca.afilias.info> writes:
> Scenario - a large table was not being vacuumed correctly; there are now
> ~15 million dead tuples that account for approximately 20%-25% of the
> table. Vacuum appears to be stalling - it ran for approximately 10 hours
> before I killed it. I hooked up to the process with gdb and this looks
> a bit suspicious to me.
Looks perfectly normal to me. Reads in btbulkdelete are exactly where
I'd expect 7.4's vacuum to be spending the bulk of its wait time on a
large table, because that's a logical-order traversal of the index, and
cannot benefit from any sequential-access advantage. (As of 8.2 we
are able to do this with a physical-order traversal, which can be a
whole lot faster.)
If you can afford to lock the table against writes for a while,
reindexing might help by bringing the index back into physical order.
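A minimal sketch of that, with hypothetical table and index names (REINDEX
holds an exclusive lock on the table for the duration, so writers will
block until it completes):

```sql
-- Rebuild every index on the table (names here are placeholders):
REINDEX TABLE big_table;

-- Or rebuild just the one bloated index:
REINDEX INDEX big_table_some_idx;
```

After the rebuild, the next VACUUM's btbulkdelete pass should read the
index in something much closer to sequential order.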
regards, tom lane