Thread: open of /usr/lib/pgsql/data/pg_clog/0F3E failed


open of /usr/lib/pgsql/data/pg_clog/0F3E failed

From: ruben
Date:
Hi:

I'm running a large database on PostgreSQL 7.1.3. Twenty days ago the
database failed with an alarming and not very descriptive error:

pg_exec() query failed:  server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.

I lost some data and had to recreate a table without knowing the reason
for the failure. Now I get this error:

FATAL 2:  open of /usr/lib/pgsql/data/pg_clog/0F3E failed: No existe el
fichero o el directorio (No such file or directory)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

When I VACUUM the affected table I get:

FATAL 2:  open of /usr/lib/pgsql/data/pg_clog/0F5C failed: No existe el
fichero o el directorio (No such file or directory)
server closed the connection unexpectedly
         This probably means the server terminated abnormally
         before or while processing the request.
The connection to the server was lost. Attempting reset: NOTICE:
Message from PostgreSQL backend:
The Postmaster has informed me that some other backend
died abnormally and possibly corrupted shared memory.
I have rolled back the current transaction and am
going to terminate your database system connection and exit.
Please reconnect to the database system and repeat your query.
Failed.

I guess I have lost some data in one table again.

Can someone help with the origin of these errors? How can I avoid them?

Thanks in advance.



Re: open of /usr/lib/pgsql/data/pg_clog/0F3E failed

From: Tom Lane
Date:
ruben <ruben20@superguai.com> writes:
> I lost some data and had to recreate a table without knowing the reason
> for the failure. Now I get this error:

> FATAL 2:  open of /usr/lib/pgsql/data/pg_clog/0F3E failed: No existe el
> fichero o el directorio (No such file or directory)

This looks like one of the common symptoms of corrupted data
(specifically, you have a trashed transaction number in some row header,
which causes the code to try to look up the transaction status in a
portion of clog that doesn't exist yet).

The most likely bet is that you are having intermittent hardware
problems causing data corruption.  I'd suggest testing the hardware as
best you can.  memtest86 and badblocks are a couple of widely-used test
programs for RAM and disk respectively.
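
As a sketch of what that testing might look like (the device name below is
an assumption; substitute whatever actually holds the data directory, and
run the scan during a quiet period since it is I/O intensive):

    # Read-only surface scan of the disk holding the PostgreSQL data
    # directory; -s shows progress, -v is verbose.
    badblocks -sv /dev/sda

    # memtest86 boots from its own media rather than running from a shell,
    # so the RAM test means scheduling some downtime for the machine.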

As for trying to recover your data, you can look in the mailing list
archives for prior discussions of coping with data corruption.  The
particular rows or pages that are corrupt are probably unrecoverable,
but you can zero them out and at least make the rest of the table
readable.  First you need to stabilize the hardware, though.
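
One workaround that comes up in those discussions (only a sketch, and it
masks the symptom rather than repairing the corrupted rows) is to create
the missing clog segment filled with zeroes so the transaction-status
lookup stops failing. The file name comes from the error message above;
the 256 kB size is an assumption to verify against the existing files in
pg_clog:

    # Stop the postmaster and take a file-level backup of the data
    # directory before touching anything under it.
    pg_ctl stop -D /usr/lib/pgsql/data

    # Create the missing segment as zero bytes; match the size of the
    # other files already present in pg_clog (256 kB here is an assumption).
    dd if=/dev/zero of=/usr/lib/pgsql/data/pg_clog/0F3E bs=256k count=1

    pg_ctl start -D /usr/lib/pgsql/data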

            regards, tom lane

Slow after VACUUM, fast after DROP-CREATE INDEX

From: ruben
Date:
Hi:

Lately I have run into some problems: postmaster fatal errors, corrupted
data and indexes, lost tuples, etc. I was advised to check the hardware,
and I will do so.

Today, one of the processes that runs daily took 4 hours when it usually
takes about 5 minutes. After a VACUUM ANALYZE of the affected tables it
still took the same time to finish; then I recreated (dropped and created)
the index on the affected table and the process was fast again. My question
is: isn't it enough to run a VACUUM to optimize a table and its indexes? Is
it advisable to recreate indexes from time to time?

Thanks in advance.



Re: Slow after VACUUM, fast after DROP-CREATE INDEX

From: Harald Fuchs
Date:
In article <411296B5.6000204@superguai.com>,
ruben <ruben20@superguai.com> writes:

> Today, one of the processes that runs daily took 4 hours when it usually
> takes about 5 minutes. After a VACUUM ANALYZE of the affected tables it
> still took the same time to finish; then I recreated (dropped and created)
> the index on the affected table and the process was fast again. My question
> is: isn't it enough to run a VACUUM to optimize a table and its indexes? Is
> it advisable to recreate indexes from time to time?

This was necessary in PostgreSQL up to 7.3.x, but 7.4.x is supposed to
fix that.  What version are you running?
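
On those older releases the usual workaround is a periodic REINDEX of the
bloated table instead of dropping and recreating the index by hand. A
minimal sketch, with mydb and mytable as placeholder names (REINDEX takes
an exclusive lock, so schedule it for a quiet period):

    # Rebuild every index on the table, then refresh planner statistics.
    # On pre-7.4 releases a plain VACUUM does not shrink already-bloated
    # indexes, which is why the explicit REINDEX is needed.
    psql -c "REINDEX TABLE mytable;" mydb
    psql -c "VACUUM ANALYZE mytable;" mydb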

Re: Slow after VACUUM, fast after DROP-CREATE INDEX

From: ruben
Date:
Thanks Harald, I'm running PostgreSQL 7.1.3.





Re: Slow after VACUUM, fast after DROP-CREATE INDEX

From: "Scott Marlowe"
Date:
Unfortunately, the administrative overhead in 7.1.3 is noticeably higher
than it is in 7.4.  The overhead should be lowered even more in 8.0 with
the integration of the autovacuum daemon into the backend process.
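
Without autovacuum, that maintenance has to be scheduled by hand. A minimal
sketch of a nightly cron entry for the postgres user (path and schedule are
assumptions):

    # crontab -e as the postgres user: VACUUM ANALYZE every database at 03:00.
    0 3 * * * /usr/bin/vacuumdb -a -z -q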

On Fri, 2004-08-06 at 10:24, ruben wrote:
> Thanks Harald, I'm running PostgreSQL 7.1.3.


Re: Slow after VACUUM, fast after DROP-CREATE INDEX

From: Gaetano Mendola
Date:
Scott Marlowe wrote:
> Unfortunately, the administrative overhead in 7.1.3 is noticeably higher
> than it is in 7.4.  The overhead should be lowered even more in 8.0 with
> the integration of the autovacuum daemon into the backend process.

Maybe autovacuum will be integrated in version 8.1 rather than 8.0. In any
case, pg_autovacuum alone is not enough; I suggest running, at least once a
day, a VACUUM FULL and a REINDEX on heavily updated tables.
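
A sketch of such a nightly routine, with mydb and orders as placeholder
names (VACUUM FULL and REINDEX both take exclusive locks, so this belongs
in a maintenance window):

    # Compact the heavily updated table and rebuild its indexes once a day.
    psql -c "VACUUM FULL ANALYZE orders;" mydb
    psql -c "REINDEX TABLE orders;" mydb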


Regards
Gaetano Mendola