Discussion: How to check if PostgreSQL files are OK
Folks,
I was using PostgreSQL 8.x in a development environment when one day I started getting all kinds of low-level errors while running queries and eventually had to reinstall. Maybe it was salvageable, but since it was a test database anyway it didn't matter.
We use PostgreSQL 9 on our production server, and I was wondering whether there is a way to know when pages get corrupted. I see that some kind of page checksum is maintained from 9.3 onwards, but until then, is there a way to be notified quickly when such a thing happens? I use a base backup plus rsync of WAL files as a disaster recovery solution. Will this be useful when such a scenario occurs?
Thanks.
Nikhil,
* Nikhil G Daddikar (ngd@celoxis.com) wrote:
> We use PostgreSQL 9 on our production server and I was wondering if
> there is a way to know when pages get corrupted.
It's not great, but there are a few options. First is to use pg_dump
across the entire database and monitor the PG logs to see if it barfs
about anything. Another thing that you can do is to write a script
which pulls out all of the data from each table using an ORDER BY which
matches some index on the table; PG will, generally, use an in-order
index traversal, which will validate the index and the heap, again, to
some extent.
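To make that concrete, here is a minimal sketch of such a check in Python, assuming psycopg2 is installed, pg_dump is on the PATH, and connection details come from the usual PG* environment variables; the database, table, and column names are placeholders:

#!/usr/bin/env python3
# Rough corruption smoke test along the lines described above.
# Assumptions: pg_dump is on the PATH, psycopg2 is installed, and the
# database/table/column names below are placeholders for your own.
import subprocess
import psycopg2

DBNAME = "mydb"        # hypothetical database name
TABLE = "orders"       # hypothetical table
INDEXED_COL = "id"     # a column covered by an index on that table

# 1. Dump the whole database and discard the output; a damaged heap page
#    should make pg_dump fail, and details end up in the PostgreSQL logs.
rc = subprocess.call(["pg_dump", DBNAME], stdout=subprocess.DEVNULL)
if rc != 0:
    raise SystemExit("pg_dump failed -- check the PostgreSQL logs")

# 2. Read the table back ordered by an indexed column; when the planner
#    picks an index scan this touches both the index and the heap.
conn = psycopg2.connect(dbname=DBNAME)
cur = conn.cursor()
cur.execute("SELECT * FROM {0} ORDER BY {1}".format(TABLE, INDEXED_COL))
while cur.fetchmany(10000):
    pass   # only checking that every row can be read without error
conn.close()
print("no read errors detected")

Note that the ORDER BY scan only exercises the index when the planner actually chooses an index scan, so this is a best-effort check at most.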
> I see that
> there is some kind of checksum maintained from 9.3 but till then is
> there a way to be notified quickly when such a thing happens? I use
> a basebackup+rsync of WAL files as a disaster recovery solution.
> Will this be useful when such a scenario occurs?
It really depends. Having multiple backups over time will limit the
risk that corruption gets propagated to a slave system. Also, there is
a CRC on the WAL records which are shipped, which helps a bit, but there
are still cases where corruption can get you. The best thing is to have
frequent, tested, backups.
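Just as an illustration, a dated nightly base backup job along those lines could be as small as the following sketch (pg_basebackup assumed on the PATH and allowed to make a replication connection; the backup directory is a placeholder):

#!/usr/bin/env python3
# Sketch of a nightly base backup job, e.g. run from cron.
# Assumptions: pg_basebackup is on the PATH, the server allows a
# replication connection for the current user, and the backup root
# directory below is a placeholder.
import datetime
import os
import subprocess

BACKUP_ROOT = "/backups/pg"   # hypothetical location
target = os.path.join(BACKUP_ROOT, datetime.date.today().isoformat())

# Compressed, tar-format base backup into a dated directory.
rc = subprocess.call(["pg_basebackup", "-D", target, "-Ft", "-z"])
if rc != 0:
    raise SystemExit("pg_basebackup failed -- keep the previous backups")
print("base backup written to " + target)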
Thanks,
Stephen
There is a wiki page about corruption detection:
but I think avoiding corruption is more important and practical than trying to check for corruption:
Jov
blog: http://amutu.com/blog
> It really depends. Having multiple backups over time will limit the
> risk that corruption gets propagated to a slave system. Also, there is
> a CRC on the WAL records which are shipped, which helps a bit, but there
> are still cases where corruption can get you. The best thing is to have
> frequent, tested, backups.
How does one validate a backup? Is there any generally practiced way of doing that? Or what do you mean when you say "tested" backups?
--
Amit Langote
* Amit Langote (amitlangote09@gmail.com) wrote:
> How does one validate a backup? Is there any generally practiced way
> of doing that? Or what do you mean when you say "tested" backups?
You restore from it and then query the restored database for expected
contents, at least.
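As a rough sketch of what "query the restored database for expected contents" might look like in practice, assuming psycopg2 and a freshly restored database; the database name, table names, and minimum row counts are placeholders:

#!/usr/bin/env python3
# Sketch of a "tested backup" check, meant to be run against a database
# that has just been restored from the latest backup. The restored
# database name, table names, and minimum row counts are placeholders
# for whatever invariants make sense for your data.
import psycopg2

RESTORED_DB = "mydb_restore_test"   # hypothetical restored database
EXPECTATIONS = {                    # table -> minimum expected row count
    "customers": 1000,
    "orders": 50000,
}

conn = psycopg2.connect(dbname=RESTORED_DB)
cur = conn.cursor()
for table, minimum in EXPECTATIONS.items():
    cur.execute("SELECT count(*) FROM {0}".format(table))
    count = cur.fetchone()[0]
    if count < minimum:
        raise SystemExit("{0} has only {1} rows, expected at least {2}"
                         .format(table, count, minimum))
conn.close()
print("restored database passes the basic checks")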
Thanks,
Stephen