Thread: Determining right size for max_fsm_pages on large setup?


Determining right size for max_fsm_pages on large setup?

From: Francisco Reyes
Date:
Is there a way to determine the needed amount of fsm pages?

I have a database that I started with 1,000,000 max_fsm_pages, and I was
doing a vacuum analyze verbose daily. I checked every couple of days to make
sure we had the right amount of FSM pages.

A few days ago I noticed that we got the notice "Consider increasing the
configuration parameter" and it recommended 1,216,864. So I first increased
it to 1.5 million and switched to doing two vacuum analyzes per day.

Ever since, I have been gradually increasing max_fsm_pages, and every time
the suggestion comes back about 200,000 over what I have set. I finally
tried going from 3 million to 5 million, and it still suggests
that I need about 200K more than what I have.

I have also decreased autovacuum_vacuum_scale_factor to 0.1, but given
that this database is already over 100GB I am thinking of making
autovacuum_vacuum_scale_factor 0.01.

Have 3 100GB+ databases and growing.

I also have set
autovacuum_vacuum_threshold = 50000     # min # of tuple updates before
autovacuum_analyze_threshold = 100000   # min # of tuple updates before

but that doesn't seem to be helping.
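For reference, autovacuum vacuums a table once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples, per the docs. A quick sketch (the 100-million-row table is hypothetical, just to show why the scale factor dominates on large tables):

```python
# Autovacuum vacuums a table once dead tuples exceed:
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
def vacuum_trigger(reltuples, threshold=50000, scale_factor=0.1):
    """Dead tuples needed before autovacuum will vacuum a table."""
    return threshold + scale_factor * reltuples

# Hypothetical 100-million-row table:
rows = 100_000_000
print(round(vacuum_trigger(rows, scale_factor=0.1)))   # ~10 million dead tuples
print(round(vacuum_trigger(rows, scale_factor=0.01)))  # ~1 million dead tuples
```

At 0.1, a table that size accumulates about ten million dead rows before autovacuum touches it; at 0.01 the trigger is an order of magnitude lower, which is why the scale factor matters far more than the flat threshold here.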

Right now we are migrating millions of records from an old system so there
is a very high number of inserts/updates/deletes (updates/delete on some
temporary tables, but once inserted the data is never changed).

In the output of vacuum full analyze I see:
Current limits are:  5000000 page slots, 1000 relations, using 29362 KB.
Is that space on disk or in memory?
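(In the 8.x releases the free space map lives in shared memory; the docs put it at roughly six bytes per page slot plus about 70 bytes per relation. A back-of-the-envelope check that roughly reconstructs the reported figure; the two per-item constants are from the docs, the rest is illustration:)

```python
# Pre-8.4 FSM sizing, per the docs: ~6 bytes of shared memory per
# page slot, ~70 bytes per relation tracked by max_fsm_relations.
BYTES_PER_PAGE_SLOT = 6
BYTES_PER_RELATION = 70

def fsm_shared_memory_kb(max_fsm_pages, max_fsm_relations):
    total = (max_fsm_pages * BYTES_PER_PAGE_SLOT
             + max_fsm_relations * BYTES_PER_RELATION)
    return total / 1024

# 5,000,000 page slots and 1,000 relations, as in the vacuum output:
print(round(fsm_shared_memory_kb(5_000_000, 1_000)))  # ~29365 KB, near the reported 29362 KB
```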

Re: Determining right size for max_fsm_pages on large setup?

From: Vivek Khera
Date:
On Sep 8, 2006, at 4:29 PM, Francisco Reyes wrote:

> I have a database that I started with 1,000,000 max_fsm_pages and I
> was doing a vacuum analyze verbose daily. Checked every couple of
> days to make sure we had the right amount of fsm pages.
> A few days ago I noticed that we got the notice "Consider
> increasing the configuration parameter" and it recommended
> 1,216,864. So I first increased it to 1.5 Million and changed to do
> two vacuum analyze per day.
> Ever since I have been gradually increasing the numbers of
> max_fsm_pages and every time we get about 200,000 over what I have
> set. Finally tried going from 3 Million to 5 Million and it still
> suggests that I need about 200K more than what I have.

how many actual pages of data (and index) do you have?  you want to
see how many relpages your relations are using in the pg_class
view.  i would venture to say if you have deleted rows in every
single page, you'd want at least one entry per page of your table(s).

once you're done with your migration, you should consider running a
cluster on the big tables to pack them more tightly and get rid of
some of your pages with empty space.



Re: Determining right size for max_fsm_pages on large setup?

From: Francisco Reyes
Date:
Vivek Khera writes:

> how many actual pages of data (and index) do you have?  you want to
> see how many relpages you're relations are using in the pg_class
> view.

In 1 database:
select relname,relpages,reltuples from pg_class order by reltuples desc
limit 7;
             relname              | relpages |  reltuples
----------------------------------+----------+-------------
 pg_toast_988195_index            |   148159 | 3.87392e+07
 pg_toast_988195                  |  9551848 | 3.87391e+07
 pg_toast_988154_index            |   123991 | 3.24807e+07
 pg_toast_988154                  |  8113192 | 3.24807e+07
 msgrcpnts_mssgid_recpttp_recptid |    11192 | 2.52415e+06
 messagerecipients                |    16078 | 2.52415e+06
 messagerecipients_pkey           |     6375 | 2.52415e+06



In another:
select relname,relpages,reltuples from pg_class order by reltuples desc
limit 7;
          relname          | relpages | reltuples
---------------------------+----------+------------
 pg_toast_2352480          | 14902626 | 3.0651e+07
 pg_toast_2352480_index    |   222896 | 3.0651e+07
 archiveque_host           |    11848 |     831494
 archiveque_messageid      |    13223 |     831494
 archiveque_pkey           |     4732 |     831494
 archiveque                |   161097 |     831494
 archiveque_archive_que_id |     5119 |     831494


>  i would venture to say if you have deleted rows in every
> single page, you'd want at least one entry per page of your table(s).

So about 15 million page slots, according to the above?
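Totaling just those top-seven relpages values (numbers copied from the two queries above; the real totals over all relations would be higher):

```python
# relpages of the top-7 relations from the two pg_class queries above
db1 = [148159, 9551848, 123991, 8113192, 11192, 16078, 6375]
db2 = [14902626, 222896, 11848, 13223, 4732, 161097, 5119]
print(sum(db1))  # 17970835 pages
print(sum(db2))  # 15321541 pages
```

So each database alone is already in the 15-18 million page range, and since max_fsm_pages is a server-wide setting, databases sharing one postmaster all draw from the same budget.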

> once you're done with your migration, you should consider running a
> cluster on the big tables

I plan to. The data will divide nicely by dates, and all searches will
have date parameters.

After the migration is done and things are stable, should I bring the value back down?