Thread: does VACUUM ANALYZE complete with this error?

does VACUUM ANALYZE complete with this error?

From:
Susan Russo
Date:
Hi,

We're seeing this type of error message:

NOTICE:  number of page slots needed (237120) exceeds max_fsm_pages (120000)
HINT:  Consider increasing the configuration parameter "max_fsm_pages" to a value over 237120.
vacuumdb: vacuuming database "fb_2007_01_17"

I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a regular event),
however I'm wondering if the vacuum analyze which reports the error was
actually completed?
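
For illustration, a minimal sketch of raising the setting named in the HINT
(the exact value here is only a guess, picked to sit comfortably above the
reported need; on 8.1 the change only takes effect after a server restart):

    -- check the running value from psql
    SHOW max_fsm_pages;

    -- then in postgresql.conf, for example:
    --   max_fsm_pages = 300000
    -- max_fsm_relations may need a bump too if the database has many tables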

Thanks
Susan Russo


Re: does VACUUM ANALYZE complete with this error?

From:
Tom Lane
Date:
Susan Russo <russo@morgan.harvard.edu> writes:
> We're seeing this type of error message:

> NOTICE:  number of page slots needed (237120) exceeds max_fsm_pages (120000)
> HINT:  Consider increasing the configuration parameter "max_fsm_pages" to a value over 237120.
> vacuumdb: vacuuming database "fb_2007_01_17"

> I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a regular event),

What PG version is that?  I recall we fixed a problem recently that
caused the requested max_fsm_pages to increase some more when you'd
increased it to what the message said.

            regards, tom lane

Re: does VACUUM ANALYZE complete with this error?

From:
Vivek Khera
Date:
On May 23, 2007, at 9:26 AM, Susan Russo wrote:

> I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a
> regular event),
> however I'm wondering if the vacuum analyze which reports the error was
> actually completed?

Yes, it completed.  However not all pages with open space in them are
accounted for, so you will probably end up allocating more pages than
you otherwise would have.  I take it as a sign of not running vacuum
often enough so that the pages with available space get filled up
sooner.

I'd bump fsm pages up to perhaps double or triple what you've got
now, and try running vacuum a bit more often on your hottest tables
(or even consider changing your table structure to limit the
"hotness" of that table).  For example, if the table has a lot of
columns, but only one is updated often, split that column out into
its own table so that you have all the changes clustered into the
same set of pages while leaving the rest of the columns in place.
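
As a rough sketch of that kind of split (all table and column names below are
invented for illustration, not from the original message):

    -- the wide, mostly-static table stays as-is; the one hot column moves out
    CREATE TABLE item_hits (
        item_id   integer PRIMARY KEY REFERENCES items(item_id),
        hit_count integer NOT NULL DEFAULT 0
    );

    -- frequent updates now churn only this narrow table, so the dead-row
    -- turnover (and the vacuuming it needs) stays confined to a few pages
    UPDATE item_hits SET hit_count = hit_count + 1 WHERE item_id = 42;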


Re: does VACUUM ANALYZE complete with this error?

From:
Susan Russo
Date:
Hi Tom,

>What PG version is that?  I recall we fixed a problem recently that
>caused the requested max_fsm_pages to increase some more when you'd
>increased it to what the message said.

8.1.4

As Vivek suggested, we are implementing more regular vacuuming.

Thanks!
Susan

Re: does VACUUM ANALYZE complete with this error?

From:
Tom Lane
Date:
Susan Russo <russo@morgan.harvard.edu> writes:
>> What PG version is that?  I recall we fixed a problem recently that
>> caused the requested max_fsm_pages to increase some more when you'd
>> increased it to what the message said.

> 8.1.4

OK, I checked the CVS history and found this:

2006-09-21 16:31  tgl

    * contrib/pg_freespacemap/README.pg_freespacemap,
    contrib/pg_freespacemap/pg_freespacemap.c,
    contrib/pg_freespacemap/pg_freespacemap.sql.in,
    src/backend/access/gin/ginvacuum.c,
    src/backend/access/gist/gistvacuum.c,
    src/backend/access/nbtree/nbtree.c, src/backend/commands/vacuum.c,
    src/backend/commands/vacuumlazy.c,
    src/backend/storage/freespace/freespace.c,
    src/include/storage/freespace.h: Fix free space map to correctly
    track the total amount of FSM space needed even when a single
    relation requires more than max_fsm_pages pages.  Also, make VACUUM
    emit a warning in this case, since it likely means that VACUUM FULL
    or other drastic corrective measure is needed.    Per reports from
    Jeff Frost and others of unexpected changes in the claimed
    max_fsm_pages need.

This is in 8.2, but we didn't back-patch because it made incompatible
changes in the contrib/pg_freespacemap views.

As the commit message says, the behavior of having the requested
max_fsm_pages value move up after you increase the setting is triggered
by having individual tables that need more than max_fsm_pages.  So you
definitely have got a problem of needing more vacuuming...
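
As a sketch of chasing that down (the table name below is hypothetical; the
"drastic corrective measure" in the commit message is the VACUUM FULL):

    -- per-table detail on removable row versions and page counts
    VACUUM VERBOSE bloated_table;

    -- if a single table on its own needs more slots than max_fsm_pages can
    -- hold, a one-off VACUUM FULL can compact it (it takes an exclusive lock)
    VACUUM FULL VERBOSE bloated_table;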

            regards, tom lane

Re: does VACUUM ANALYZE complete with this error?

From:
Susan Russo
Date:
Hi Tom - thanks for the additional/confirming info.

>So you definitely have got a problem of needing more vacuuming...

Yes, we're going to nightly, as I said in my last message; however,
it's worse than this.....

I found that *1* vacuum analyze works well in many instances to
help optimize query performance (which in one example was running
at lightning speed on 2 of our 5 identical software/hardware/configs
Pg 8.1.4 servers).  However, in several cases, a *2nd* vacuum
analyze was necessary.  (btw - first vacuum was after adjusting
max_fsm_pages, and getting no error msgs from vacuum).

I *think* - please advise - I may be able to adjust configs
for a more effective vacuum analyze the first time around (??).
Perhaps an increase to default_statistics_target (set to 100??).

I'd read that when performing a vacuum analyze, Pg doesn't actually
go through all values in each table and update statistics, rather,
it samples some of the values and uses that statistical sample.
Thus, different runs of the vacuum analyze might generate different
statistics (on different dbs on different servers) since the same db
may be used differently on a different server.   Is this correct??
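
For reference, a sketch of one way to raise the sample size for specific
columns rather than globally (table and column names here are made up):

    -- keep up to 100 most-common values / histogram buckets for this column
    ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 100;

    -- re-sample with the larger per-column target
    ANALYZE mytable;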

Thanks for any advice....I'm hoping regular duplicate vacuum
analyze isn't the solution...

Susan

Re: does VACUUM ANALYZE complete with this error?

From:
Scott Marlowe
Date:
Susan Russo wrote:
> Hi Tom - thanks for the additional/confirming info.
>
>
>> So you definitely have got a problem of needing more vacuuming...
>>
>
> Yes, we're going to nightly, as I said in my last message; however,
> it's worse than this.....
>
> I found that *1* vacuum analyze works well in many instances to
> help optimize query performance (which in one example was running
> at lightning speed on 2 of our 5 identical software/hardware/configs
> Pg 8.1.4 servers).  However, in several cases, a *2nd* vacuum
> analyze was necessary.  (btw - first vacuum was after adjusting
> max_fsm_pages, and getting no error msgs from vacuum).
>
> I *think* - please advise - I may be able to adjust configs
> for a more effective vacuum analyze the first time around (??).
> Perhaps an increase to default_statistics_target (set to 100??).
>
> I'd read that when performing a vacuum analyze, Pg doesn't actually
> go through all values in each table and update statistics, rather,
> it samples some of the values and uses that statistical sample.
> Thus, different runs of the vacuum analyze might generate different
> statistics (on different dbs on different servers) since the same db
> may be used differently on a different server.   Is this correct??
>
> Thanks for any advice....I'm hoping regular duplicate vacuum
> analyze isn't the solution...
A couple o' points:

 Update your pg servers.  8.1.9 is out, and there are plenty of bugs fixed
between 8.1.4 and 8.1.9, so you should update.  It's relatively
painless and worth the effort.

 I get the feeling you think vacuum and analyze are still married.
They're not; they got divorced around 7.3 or so.  It used to be that to run
analyze you needed vacuum.  Now you can run either one without the other.
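
In other words (the table name here is just a placeholder):

    ANALYZE some_table;           -- refresh planner statistics only
    VACUUM some_table;            -- reclaim dead-row space only
    VACUUM ANALYZE some_table;    -- both in one command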

 Vacuum is more expensive than analyze.  Since vacuum reclaims dead
tuples, it has to do more work than analyze, which only does a quick
pass over a random sample of the table.  Hence you are right in what
you heard: from one run to the next, the statistics analyze returns will
usually be a bit different.  Increasing the default stats target allows
analyze to look at more random samples and get a more accurate report on
the values and their distributions in the table.  This comes at the cost
of slightly greater analyze and query-planning times.
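
A sketch of trying that (the value 100 and the table name are just examples):

    -- session-level override; the same parameter can also be set in postgresql.conf
    SET default_statistics_target = 100;
    ANALYZE some_table;

    -- inspect what analyze actually gathered
    SELECT attname, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = 'some_table';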