Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
From | Heiko Wundram |
---|---|
Subject | Re: Somewhat automated method of cleaning table of corrupt records for pg_dump |
Date | |
Msg-id | 508517E7.2010306@modelnine.org |
In reply to | Re: Somewhat automated method of cleaning table of corrupt records for pg_dump (Craig Ringer <ringerc@ringerc.id.au>) |
Responses | Re: Somewhat automated method of cleaning table of corrupt records for pg_dump |
List | pgsql-general |
On 22.10.2012 09:05, Craig Ringer wrote:
> Working strictly with a *copy*, does REINDEXing then CLUSTERing the
> tables help? VACUUM FULL on 8.3 won't rebuild indexes, so if index
> damage is the culprit a reindex may help. Then, if CLUSTER is able to
> rewrite the tables in index order you might be able to recover.

REINDEXing all indexes works, and CLUSTERing all tables on their primary key (index) after rebuilding the indexes also works, but the dump still fails with the same error as before (i.e., out of memory while dumping a specific table). I guess the broken record(s) in that table are either not moved by CLUSTER, or their data portion isn't adapted to the respective index.

Which means that, besides walking all table rows (and deleting by OID those that contain errors), there's not much I can do, I guess. If there's any other possibility of "out of the box" recovery - short of writing myself a small script to walk all rows - I'd still be grateful for a hint.

Thanks for the pointer to CLUSTER and REINDEX anyway; I'd never properly used either before, and this gave me an incentive to dig into the documentation.

--
--- Heiko.
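P.S. for the archives: a minimal sketch of what such a row-walking script could look like, assuming Python with psycopg2. The table name "mytable", the connection string, and the per-page row bound are assumptions for illustration, not details from the original setup; any server error while fetching a row is simply treated as marking that row unreadable.

```python
#!/usr/bin/env python
# Minimal sketch, not a tested recovery tool. Assumes psycopg2 is installed,
# the damaged table is called "mytable", and the DSN below is a placeholder.
# Rows are fetched one ctid at a time, so a single unreadable tuple doesn't
# abort the whole scan.
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()

# Number of heap pages in the table.
cur.execute("SELECT pg_relation_size('mytable')"
            " / current_setting('block_size')::int")
npages = cur.fetchone()[0]

bad_ctids = []
for page in range(npages):
    # An 8 kB heap page holds at most 291 tuples, so 300 is a safe bound;
    # a ctid that doesn't exist simply returns zero rows, without error.
    for line in range(1, 300):
        ctid = '(%d,%d)' % (page, line)
        try:
            cur.execute("SELECT * FROM mytable WHERE ctid = %s", (ctid,))
            cur.fetchall()  # force the tuple data to actually be read
        except psycopg2.Error:
            bad_ctids.append(ctid)
            # If the backend itself died on this row, the connection is
            # gone and the script would have to reconnect before going on.

print("rows that could not be read:", bad_ctids)
# After reviewing the list, broken rows could be removed individually, e.g.:
#   DELETE FROM mytable WHERE ctid = '(12345,6)';
```

Deleting by ctid rather than OID has the advantage of working even on tables created WITHOUT OIDS; either way, this kind of thing should only ever be run against the copy, never the original cluster.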