max_fsm_pages Sanity Check

From: HT
Subject: max_fsm_pages Sanity Check
Date:
Msg-id: aum461$krk$1@news.hub.org
Replies: Re: max_fsm_pages Sanity Check
List: pgsql-admin
Background:
We have a quite large production Postgres 7.2 DB which is out of control in
terms of disk consumption. We made it through the holiday shopping season,
but it isn't over yet. We have taken the DB down once for a vacuum analyze,
but only vacuumed 2 large tables, which took FIVE HOURS WITH NO RESULTS.
Posts to the newsgroup advised that I crank up max_fsm_pages. Right now it
is at roughly 65,000. So I went to the postgres.org site and searched the
newsgroups....

Where I am now:
I've searched the newsgroup archives for info on the max_fsm_pages setting,
and beyond what the max_fsm_pages value means, I found no definitive answer
as to exactly "how" to best determine the optimal setting. Nor could I find
adequate documentation on it. I saw a lot of people asking others to
"report back with your findings", but I don't find the results or follow-up
in the newsgroup (please point me to an item if I'm mistaken). Seeing as
I'm not entirely into playing guinea pig with my production system, I
welcome expert, been-there, non-theoretical advice.


Here are some numbers, followed by a few questions:

I.  Looking at my production DB with the following query:

select relname, relpages from pg_class where relkind in ('r', 't', 'i');

       relname        | relpages
----------------------+----------
 users                |   408711
 merchant_sessions    |   236333
 batch_load_awaiting  |   173785
 orders               |    92241
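For a sanity check on the scale involved, the relpages figures above can be totted up (a sketch in Python; the 8 kB default block size is an assumption about this build, and only the four tables shown are counted):

```python
# Rough back-of-envelope using the relpages figures from the query above.
BLOCK = 8192  # assumed default PostgreSQL block size in bytes

relpages = {
    "users": 408711,
    "merchant_sessions": 236333,
    "batch_load_awaiting": 173785,
    "orders": 92241,
}

total_pages = sum(relpages.values())     # pages in the four largest relations
total_gb = total_pages * BLOCK / 1024**3

print(total_pages)          # 911070 pages
print(round(total_gb, 2))   # ~6.95 GB
```

Even these four relations alone run to roughly 911,000 pages, which puts the current max_fsm_pages of ~65,000 in perspective.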


II.  But here is the same query from my standby DB (restored from a pg_dump
of production)... hmmm, not exactly what I would expect:

       relname        | relpages
----------------------+----------
 merchant_sessions    |   615588
 users                |   202696
 batch_load_awaiting  |   143735
 orders               |   130894


Question:    Now, why wouldn't a pg_restore into my standby DB yield smaller
page counts than the live one, which is bloated and consuming tons of disk
space?
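One way to read these numbers: relpages is a page count, not a page size, and a freshly restored table is packed compactly, so the production-minus-standby difference approximates the reclaimable bloat. A sketch for the users table (numbers taken from the two query outputs above; the 8 kB block size is an assumption):

```python
BLOCK = 8192  # assumed default PostgreSQL page size in bytes

prod_users = 408711     # relpages on production
standby_users = 202696  # relpages after pg_dump / restore

bloat_pages = prod_users - standby_users
bloat_gb = bloat_pages * BLOCK / 1024**3

print(bloat_pages)          # 206015 pages
print(round(bloat_gb, 2))   # ~1.57 GB potentially reclaimable in users
```

This doesn't explain merchant_sessions growing on the standby, but for users it suggests roughly half the table is dead space.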

III.  The results of a vacuum on the users table (production, a couple of
weekends ago) yielded the following (not including all the index output):

2002-12-15 03:22:18 [22450]  NOTICE:  Removed 3254600 tuples in 295053 pages.
        CPU 111.50s/124.03u sec elapsed 2721.98 sec.
2002-12-15 03:22:18 [22450]  NOTICE:  Pages 408711: Changed 152946, Empty 0; Tup 4126716: Vac 3254600, Keep 0, UnUsed 28559.
        Total CPU 338.16s/1091.28u sec elapsed 8502.90 sec.


Question:   So should I hike max_fsm_pages up to 1,000,000?   Is this too
high a value, or will it be OK?  If it is too big, then how big is too big?
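On the cost side of that question, the usual rule of thumb is that each max_fsm_pages slot consumes a small fixed amount of shared memory. The per-slot figure below (about 6 bytes in the 7.2/7.3 era) is an assumption, not something stated in this thread, so check the documentation for your release:

```python
# Hedged estimate: each max_fsm_pages slot is assumed to cost roughly
# 6 bytes of shared memory (verify against your release's docs).
SLOT_BYTES = 6
max_fsm_pages = 1_000_000

shared_mem_mb = max_fsm_pages * SLOT_BYTES / 1024**2
print(round(shared_mem_mb, 1))   # ~5.7 MB of shared memory
```

If that per-slot cost is in the right ballpark, a million-slot FSM is only a few megabytes of shared memory, so the setting is cheap relative to the disk bloat it is meant to control.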


I will be most happy to summarize my results back to the newsgroup when I
make this change and do a full vacuum.   We cannot do a full vacuum without
taking the site down, which will have to wait until this weekend or next.

Thanks in Advance



In the pgsql-admin list, by date:

Previous
From: Doran Barton
Date:
Message: Re: psql to 7.2.3 from 7.3.1
Next
From: Tom Lane
Date:
Message: Re: max_fsm_pages Sanity Check