Discussion: Vacuum returns error
PG 7.2 and RH 7.2

Developer dropped ~30+ tables from 4 different DBs on the same
server. Queries began to fail: unable to locate file:
/var/lib/pgsql/pg_clog/0006.

Immediately VACUUMed the DBs. Queries began returning OK.

Minutes later, vacuumdb -z DB_1 returned OK.

Attempted vacuumdb -z DB_2; it failed with an error similar to the
above.

Where can I locate more comprehensive error message information?
Any recommendations would be greatly appreciated.

Thanks

- Bill
(* NOTE: I realize that answering your own posts is like talking to
yourself, please forgive me... *)

On 1 Apr 2002 at 14:29, William Meloney wrote:

> PG 7.2 and RH 7.2
>
> Developer dropped ~30+ tables from 4 different DBs on the same
> server. Queries began to fail: unable to locate file:
> /var/lib/pgsql/pg_clog/0006.
>
> Immediately VACUUMed the DBs. Queries began returning OK.
>
> Minutes later, vacuumdb -z DB_1 returned OK.
>
> Attempted vacuumdb -z DB_2; it failed with an error similar to the
> above.

pg_dump DB_2 failed, citing a specific table as the point of failure.
A review of the dump file (DB_2.txt.gz) gave the number of the last
record of that table dumped BEFORE the failure. We then executed a
DELETE query and removed the offending record in the cited table. An
immediate pg_dump completed successfully. Confidence is high that a
vacuum will complete as well.

> Where can I locate more comprehensive error message information?
> Any recommendations would be greatly appreciated.
>
> Thanks
>
> - Bill
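A minimal sketch of the recovery sequence described above, assuming
the failing database is DB_2; the table name bad_table and the key
value are hypothetical placeholders standing in for whatever the tail
of the dump file points at:

    # Dump the database; pg_dump stops at the corrupted table, and the
    # tail of the file shows the last record it managed to write out.
    pg_dump DB_2 | gzip > DB_2.txt.gz
    zcat DB_2.txt.gz | tail -20

    # Remove the record that follows the last successfully dumped one.
    psql DB_2 -c "DELETE FROM bad_table WHERE id = 12345;"

    # Verify that a full dump (and a vacuum) now complete.
    pg_dump DB_2 > /dev/null && vacuumdb -z DB_2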
"William Meloney" <bmeloney@mindspring.com> writes:
> PG 7.2 and RH 7.2
> Developer dropped ~30+ tables from 4 different DBs on the same
> server. Queries began to fail: unable to locate file:
> /var/lib/pgsql/pg_clog/0006.
Hmm. What files do you actually have in /var/lib/pgsql/pg_clog?
(I'd like to see an ls -l of that directory if possible; the sizes
and mod dates might be useful clues.) Another useful bit of info
would be "select * from pg_database".
We've seen two or three reports of errors like this, which make me
think there's a corner-case bug or two lurking in the clog code.
Haven't been able to get enough data to track it down yet.
regards, tom lane
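For reference, a minimal sketch of how to collect the information Tom
asks for, assuming the data directory from this thread and that the
commands are run as the postgres user:

    # List the clog segments with their sizes and modification dates.
    ls -l /var/lib/pgsql/pg_clog

    # Show the per-database catalog rows, including datvacuumxid.
    psql template1 -c "SELECT * FROM pg_database;"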
On 2 Apr 2002 at 2:09, Tom Lane wrote:

> "William Meloney" <bmeloney@mindspring.com> writes:
> > PG 7.2 and RH 7.2
> > Developer dropped ~30+ tables from 4 different DBs on the same
> > server. Queries began to fail: unable to locate file:
> > /var/lib/pgsql/pg_clog/0006.
>
> Hmm. What files do you actually have in /var/lib/pgsql/pg_clog?
> (I'd like to see an ls -l of that directory if possible; the sizes
> and mod dates might be useful clues.)

total 252
-rw-------    1 postgres postgres   253952 apr  3 13:47 0000

> Another useful bit of info would be "select * from pg_database".

 datname   | datdba | encoding | datistemplate | datallowconn | datlastsysoid | datvacuumxid
-----------+--------+----------+---------------+--------------+---------------+--------------
 qc        |    100 |        0 | false         | true         |         16554 |       964425
 plant1    |    100 |        0 | false         | true         |         16554 |       964106
 template1 |      1 |        0 | true          | true         |         16554 |         1810
 template0 |      1 |        0 | true          | false        |         16554 |           49
 plant2    |    100 |        0 | false         | true         |         16554 |       964326
 plant3    |    100 |        0 | false         | true         |         16554 |       964264

Best of luck

- William
"William Meloney" <bmeloney@mindspring.com> writes:
> Developer dropped ~30+ tables from 4 different DBs on the same
> server. Queries began to fail: unable to locate file:
> /var/lib/pgsql/pg_clog/0006.
>>
>> Hmm. What files do you actually have in /var/lib/pgsql/pg_clog?
> [ info snipped ]
Okay. It seems quite clear that you've executed less than a million
transactions so far in this installation (a single pg_clog segment
covers about a million transactions); so 0000 is the only clog
segment that does or should exist, and there's no valid reason
for anything to be trying to access segment 0006. I do not think
this could be a clog logic error as I first suspected; it's probably
more like a garden-variety data corruption problem, i.e., something
dropped a bit and there's now an xmin or xmax field someplace that
contains a bogus transaction ID up in the six-million-something range.
My advice is to try to track down and get rid of the bad tuple.
regards, tom lane
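A minimal sketch of that hunt, assuming pg_dump has already narrowed
the problem to one table; suspect_table and the ctid value below are
hypothetical placeholders. The giveaway is an xmin or xmax far above
the roughly 964,000 transactions this installation has actually used:

    # Inspect the system columns; the damaged tuple is the one whose xmin
    # or xmax is up in the six-million range.  The scan may abort with the
    # same pg_clog error when it reaches that tuple, which (as with the
    # pg_dump output earlier in the thread) still shows where the good
    # rows end.
    psql DB_2 -c "SELECT ctid, xmin, xmax FROM suspect_table;"

    # Remove the damaged row by its physical location, or by a key taken
    # from the tail of the dump as described earlier in the thread.
    psql DB_2 -c "DELETE FROM suspect_table WHERE ctid = '(1234,5)';"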