Discussion: Autovacuum and invalid page header

Autovacuum and invalid page header

From
Ireneusz Pluta
Date:
I had a problem with a never-ending forced autovacuum process, running
to prevent xid wraparound. As I finally found (I hope), following some
advice given here: http://forums.enterprisedb.com/posts/list/2028.page,
the autovacuum in question was not just one autovacuum, but many
different autovacuums on the same table, cycling in a start-fail
pattern because of invalid page headers in some of the table's indexes.
A manual VACUUM VERBOSE told me that, and now I can also see it in the
related ERROR/CONTEXT log message pairs. I dropped the damaged indexes
and now the autovacuum seems to proceed without repeating itself.
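
In SQL terms, that amounted to something like the following (a sketch
only; database, table and index names here are made up):

psql -d mydb -c 'VACUUM VERBOSE mytable'   # the ERROR message points at the damaged index
psql -d mydb -c 'DROP INDEX mytable_broken_idx'
# the index can then be rebuilt from the (intact) heap, if still needed:
psql -d mydb -c 'CREATE INDEX mytable_broken_idx ON mytable (some_column)'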

Anyway, some further questions arise from that situation, and I'd like
to discuss them here and get some advice.

First, it seems that unnoticed damage to a relation, causing
autovacuum failures, which might initially be harmless to the cluster
as a whole (as far as only the damaged relation and its disk space are
concerned), may lead to a critical situation when a wraparound gets
close and the autovacuum running in preventive mode cannot freeze its
xids. The forced autovacuum insists on vacuuming the table which
cannot be vacuumed and locks itself there. At the same time,
autovacuum gets stuck on the database containing the damaged relation
and "forgets" about other databases which might need vacuuming. From
this point of view, just one damaged table might be a single point of
failure for the whole cluster. Is there any way, other than watching
the logs, to prevent such a situation?
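
The only thing I can think of so far is to watch how close each
database gets to wraparound, for example (untested; the alerting
threshold would be a local choice):

psql -d postgres -Atc "SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC"
# alert when the largest age keeps growing towards autovacuum_freeze_max_age
# (200 million transactions by default)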

Another issue, rather loosely related to the topic. I suspect that the
cluster may have more invalid page headers like that. They might have
been caused by past bad-sector failures on one of the drives of my
RAID array. So I think it might be a good idea to check the whole
cluster, page by page, for invalid page headers. Is there any
ready-made tool which, for instance, when given a path to a database
cluster, would traverse all cluster directories and files and check
all page headers? I probably answered myself - a manual VACUUM
[VERBOSE] would do - but it fails when it finds the first invalid
header, so I would have to fix each problem one at a time. That's why
I am looking for something that only scans and reports invalid
headers.
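
Short of a page-level scanner, the best workaround I can think of is
to vacuum each table separately, so that one damaged table does not
stop the scan of the rest (untested sketch; the database name is made
up):

psql -At -d mydb -c "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename) FROM pg_tables" |
while read -r t; do
    psql -d mydb -c "VACUUM $t" 2>&1 | grep -i 'invalid page header' && echo "damaged: $t"
done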

Yet another issue: how could it happen that bad sectors on a RAID10
member drive caused page damage? (assuming I am right about that, of
course, but I have reason to think so). Shouldn't the RAID array
prevent such file damage?

Thanks

Irek.

Re: Autovacuum and invalid page header

From
"Kevin Grittner"
Date:
Ireneusz Pluta <ipluta@wp.pl> wrote:

> many different autovacuums on the same table, cycling in a
> start-fail pattern because of invalid page headers in some of the
> table's indexes. A manual VACUUM VERBOSE told me that, and now I
> can also see it in the related ERROR/CONTEXT log message pairs. I
> dropped the damaged indexes and now the autovacuum seems to proceed
> without repeating itself.

> First, it seems that unnoticed damage to a relation, causing
> autovacuum failures, which might initially be harmless to the
> cluster as a whole (as far as only the damaged relation and its
> disk space are concerned), may lead to a critical situation when a
> wraparound gets close and the autovacuum running in preventive
> mode cannot freeze its xids. The forced autovacuum insists on
> vacuuming the table which cannot be vacuumed and locks itself
> there. At the same time, autovacuum gets stuck on the database
> containing the damaged relation and "forgets" about other
> databases which might need vacuuming. From this point of view,
> just one damaged table might be a single point of failure for the
> whole cluster. Is there any way, other than watching the logs, to
> prevent such a situation?

That does sound like an area where there might be room for
improvement within PostgreSQL; however, I strongly recommend that
you have *some* sort of periodic VACUUM ANALYZE VERBOSE of any
active database, and that you scan the results (we use grep) to look
for problems.  We send an email to the DBA team if errors or warnings
show up in the VACUUM ANALYZE VERBOSE output.
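
A minimal sketch of the kind of check I mean (our actual script
differs; the paths and the address here are made up):

vacuumdb --all --analyze --verbose > /tmp/vacuum.log 2>&1
if grep -Ei 'error|warning' /tmp/vacuum.log > /tmp/vacuum_problems.txt; then
    mail -s 'VACUUM problems' dba-team@example.com < /tmp/vacuum_problems.txt
fi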

> Another issue, rather loosely related to the topic. I suspect that
> the cluster may have more invalid page headers like that. They
> might have been caused by past bad-sector failures on one of the
> drives of my RAID array. So I think it might be a good idea to
> check the whole cluster, page by page, for invalid page headers.
> Is there any ready-made tool which, for instance, when given a
> path to a database cluster, would traverse all cluster directories
> and files and check all page headers? I probably answered myself -
> a manual VACUUM [VERBOSE] would do - but it fails when it finds
> the first invalid header, so I would have to fix each problem one
> at a time. That's why I am looking for something that only scans
> and reports invalid headers.

I don't know of anything, but you might try searching pgfoundry.

> Yet another issue: how could that happen[?]

That's the big question.  You want to try very hard to answer it,
because otherwise you'll probably be going through this all again
soon.  It could be RAM, RAID controller, OS, or a bad drive.  It
could also be an abnormal system shutdown (OS crash or power loss)
if you have an unsafe configuration.  If you don't want to see
problems like this, don't run with fsync or full_page_writes set to
"off".

-Kevin

Re: Autovacuum and invalid page header

From
Alvaro Herrera
Date:
Excerpts from Kevin Grittner's message of Thu May 13 10:25:04 -0400 2010:
> Ireneusz Pluta <ipluta@wp.pl> wrote:
>
> > many different autovacuums on the same table, cycling in a
> > start-fail pattern because of invalid page headers in some of the
> > table's indexes. A manual VACUUM VERBOSE told me that, and now I
> > can also see it in the related ERROR/CONTEXT log message pairs. I
> > dropped the damaged indexes and now the autovacuum seems to
> > proceed without repeating itself.

> That does sound like an area where there might be room for
> improvement within PostgreSQL;

Yeah, maybe we should make it put the failed table at the end of the
list, for the next run.  This is not simple to implement, if only
because autovac workers don't have any way to persist state from one run
to the next.  But this kind of thing causes enough problems that it's
probably worth it.

One thing to keep in mind, though, is that a persistent error in a
single table is enough to keep a database's datfrozenxid from advancing,
and thus force a shutdown if the wraparound horizon comes too close.  So
perhaps what we need is more visibility into autovacuum problems.
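
In the meantime, to see which table is holding a database's
datfrozenxid back, something like this works (a sketch; the database
name is made up):

psql -d mydb -c "SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 10"
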
--

Re: Autovacuum and invalid page header

From
Tom Lane
Date:
Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
> Yeah, maybe we should make it put the failed table at the end of the
> list, for the next run.  This is not simple to implement, if only
> because autovac workers don't have any way to persist state from one run
> to the next.  But this kind of thing causes enough problems that it's
> probably worth it.

> One thing to keep in mind, though, is that a persistent error in a
> single table is enough to keep a database's datfrozenxid from advancing,
> and thus force a shutdown if the wraparound horizon comes too close.  So
> perhaps what we need is more visibility into autovacuum problems.

+1 for the latter.  A recurrent vacuum failure is something that needs
to be dealt with ASAP, not partially-worked-around.

            regards, tom lane

Re: Autovacuum and invalid page header

From
Ireneusz Pluta
Date:
Tom Lane writes:
> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:
>
>> Yeah, maybe we should make it put the failed table at the end of the
>> list, for the next run.  This is not simple to implement, if only
>> because autovac workers don't have any way to persist state from one run
>> to the next.  But this kind of thing causes enough problems that it's
>> probably worth it.
>>
>
>
>> One thing to keep in mind, though, is that a persistent error in a
>> single table is enough to keep a database's datfrozenxid from advancing,
>> and thus force a shutdown if the wraparound horizon comes too close.  So
>> perhaps what we need is more visibility into autovacuum problems.
>>
>
> +1 for the latter.  A recurrent vacuum failure is something that needs
> to be dealt with ASAP, not partially-worked-around.
>
>             regards, tom lane
>
I am revisiting invalid page header cases because of the problem
explained in another recent post of mine.
I am not following the development process too closely, but as I can
see that 9.0 is almost out, let me ask here whether this problem has
been taken care of?


Re: Autovacuum and invalid page header

From
Ireneusz Pluta
Date:
Kevin Grittner writes:
> Ireneusz Pluta <ipluta@wp.pl> wrote:
>
>
>> Is there any ready-made tool which, for instance, when given a
>> path to a database cluster, would traverse all cluster directories
>> and files and check all page headers? I probably answered myself -
>> a manual VACUUM [VERBOSE] would do - but it fails when it finds
>> the first invalid header, so I would have to fix each problem one
>> at a time. That's why I am looking for something that only scans
>> and reports invalid headers.
>>
>
> I don't know of anything, but you might try searching pgfoundry.
>
Continuing to revisit my old thread, and to answer myself:
find and pg_filedump seem to be the way, more or less like this:

find "$PGDATA" -type f -name '[0-9]*' | while read -r f; do
    echo "$f"
    pg_filedump "$f" | grep -i invalid
done
# not tested yet

- of course keeping in mind that newly allocated pages might appear
invalid to pg_filedump while the server is running, as Tom explained
just recently:
> A newly-added page on disk will
> be initially filled with zeroes, which I think pg_filedump will complain
> about.  It won't get overwritten with "valid" data until the page is
> next written, either because of a checkpoint or because the buffer space
> is needed for another page.  pg_filedump can't see the state of the page
> within the server's buffers, which is what counts here.
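
A slightly expanded version of the same idea (also untested; the
report path is made up), summarizing hits per file instead of
interleaving them with the filenames:

find "$PGDATA" -type f -name '[0-9]*' | while read -r f; do
    hits=$(pg_filedump "$f" | grep -ci invalid)
    # zero-filled pages of a running server may show up here as false
    # positives, per Tom's explanation above
    [ "$hits" -gt 0 ] && echo "$f: $hits suspicious pages"
done > /tmp/page_header_report.txt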