Hi -
that's my understanding of pg_log, too. But what is the trick to
regenerate the indexes and make the tuples valid? If I do a SELECT on one
of the larger tables (>100,000 tuples), all the data does seem to get read,
since it takes the "usual" amount of time to access the corresponding file.
But the tuples then come back invalid in some way, you see.
I only remember, like a "flash", that the pg_log file was large. We have no
blobs inside the tables. Wouldn't it be possible to "scan" the raw
table files and reverse-engineer the data? That seems easier to me than
fiddling with the transaction log. The pity is that the transaction log also
affects the system tables....
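
To make that concrete, here is a rough Python sketch of such a raw scan.
Everything in it is an assumption on my part: the 7.0-era page layout as I
read it (8 KB blocks; an 8-byte page header holding pd_lower, pd_upper,
pd_special, pd_opaque; 4-byte line pointers packed little-endian as
lp_off:15, lp_flags:2, lp_len:15). It only dumps the raw tuple bytes;
decoding them into columns would still need the table's attribute list from
the catalogs. All names here are mine:

import struct
import sys

BLCKSZ = 8192        # default block size -- check your build
PAGE_HEADER_LEN = 8  # assumed 7.0 header: pd_lower, pd_upper,
                     # pd_special, pd_opaque, 2 bytes each
ITEMID_LEN = 4       # one line pointer: lp_off:15, lp_flags:2, lp_len:15

def scan_heap_file(path):
    """Walk each page of a raw heap file and yield the raw bytes behind
    every used line pointer. No validity check happens here -- that is
    exactly the information pg_log would normally supply."""
    with open(path, "rb") as f:
        page_no = 0
        while True:
            page = f.read(BLCKSZ)
            if len(page) < BLCKSZ:
                break
            pd_lower, pd_upper = struct.unpack_from("<HH", page, 0)
            # skip pages whose header looks corrupt
            if PAGE_HEADER_LEN <= pd_lower <= pd_upper <= BLCKSZ:
                n_items = (pd_lower - PAGE_HEADER_LEN) // ITEMID_LEN
                for i in range(n_items):
                    (lp,) = struct.unpack_from(
                        "<I", page, PAGE_HEADER_LEN + ITEMID_LEN * i)
                    lp_off = lp & 0x7FFF          # bits 0-14 (assumes gcc-style
                    lp_flags = (lp >> 15) & 0x3   # little-endian bitfield packing)
                    lp_len = (lp >> 17) & 0x7FFF
                    # treat any nonzero lp_flags as "in use" -- crude, but
                    # good enough for a first pass over the file
                    if lp_flags and lp_off + lp_len <= BLCKSZ:
                        yield page_no, i + 1, page[lp_off:lp_off + lp_len]
            page_no += 1

if __name__ == "__main__":
    for page_no, item, raw in scan_heap_file(sys.argv[1]):
        print("page %d, item %d: %d bytes" % (page_no, item, len(raw)))

That would be run against a copy of the table file under data/base/, never
against the originals.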
Whatever approach turns out to be possible, time is no problem. Getting the
data back is what has to be done....
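
And if we do end up fiddling with pg_log the way Peter suggests below, I
picture something like the following. Again, only a sketch: both the
XID_COMMIT bit value and the question of whether pg_log pages carry
ordinary page headers need checking against the source, and the function
name is mine.

import os
import shutil
import sys

XID_COMMIT = 0x2   # assumed 2-bit "committed" status value -- verify
                   # against src/include/access/transam.h before trusting it
COMMIT_BYTE = (XID_COMMIT | XID_COMMIT << 2 |
               XID_COMMIT << 4 | XID_COMMIT << 6)  # four xacts per byte

def mark_all_committed(src, dst):
    """Write a copy of pg_log in which every 2-bit slot says 'committed'.
    Only ever run this on a copy, never on the live file. Caveat: if
    pg_log pages carry ordinary page headers (the file is managed like a
    relation, as far as I know), this clobbers those bytes too."""
    shutil.copyfile(src, dst)
    size = os.path.getsize(dst)
    with open(dst, "r+b") as f:
        f.write(bytes([COMMIT_BYTE]) * size)

if __name__ == "__main__":
    mark_all_committed(sys.argv[1], sys.argv[2])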
Ralf
PS: "COMMIT *" or what? Bitter joke... or "ROLLBACK 'rm'".
On Sun, 1 Oct 2000, Peter Eisentraut wrote:
> Bruce Momjian writes:
>
> > Can the user rename the /data directory, do initdb, save the pg_log
> > file, move the old /data back into place, add the new pg_log, and do a
> > backup of his data?
>
> My understanding is that pg_log contains flags about which transactions
> have committed, from which is inferred what tuples are valid. So
> theoretically you could set "all transactions have committed" in pg_log
> and pick out the tuples you like from the tables (after having gotten past
> the horribly corrupted indexes). But that seems like a pretty complicated
> undertaking in any case.
>
>
> --
> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/
>