"HT" <htlevine@ebates.com> writes:
> We have quite a large production Postgres 7.2 DB which is out of control in
> terms of disk consumption. We made it through the holiday shopping season,
> but it isn't over yet. We have taken the DB down once for a vacuum analyze,
> but only vacuumed 2 large tables, which took FIVE HOURS WITH NO
> RESULTS.
1. You don't need to take down the DB to do vacuuming (see the example below).
2. What do you mean by "WITH NO RESULTS"?
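For instance, a plain VACUUM (as opposed to VACUUM FULL) runs concurrently
with normal reads and writes, so something along these lines can be issued
from any psql session while the database stays up; the table names here are
just taken from your listing below:

    -- Plain VACUUM reclaims dead tuples into the free space map without
    -- taking an exclusive lock; ANALYZE refreshes planner statistics.
    VACUUM ANALYZE users;
    VACUUM ANALYZE merchant_sessions;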
> Posts to the newsgroup advised that I crank up the max_fsm_pages. Right
> now it is at roughly 65,000.
> select relname, relpages from pg_class where relkind in ('r', 't', 'i');
>        relname       | relpages
> ---------------------+----------
>  users               |   408711
>  merchant_sessions   |   236333
>  batch_load_awaiting |   173785
>  orders              |    92241
If you have not been vacuuming regularly then these relpages figures
cannot be trusted too much, but it looks to me like you might need
max_fsm_pages nearer to 1 million than 64k. If it's not large enough
to cover all (or at least nearly all) pages with free space, then you'll
have space-leakage problems. What is the tuple update/deletion rate in
these tables, anyway?
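If you have the stats collector running (stats_start_collector and
stats_row_level turned on in postgresql.conf), something like this will show
the cumulative update/delete counts; sample it twice and take the difference
to get a rate:

    -- Cumulative per-table update/delete counts since collector start.
    select relname, n_tup_upd, n_tup_del
    from pg_stat_user_tables
    order by n_tup_upd + n_tup_del desc;

As a rough upper bound for sizing, the total heap page count is:

    -- max_fsm_pages should approach this total if most tables churn heavily.
    select sum(relpages) from pg_class where relkind in ('r', 't');

Then bump the setting in postgresql.conf and restart the postmaster
(max_fsm_pages only takes effect at postmaster start):

    max_fsm_pages = 1000000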
Also, you should probably think about updating to 7.3.1 sometime soon.
There's a performance problem in the 7.2.* FSM code that shows up when
a single table has more than ~10000 pages with useful amounts of free
space --- VACUUM takes an unreasonable amount of time to record the free
space.
regards, tom lane