Discussion: Missing pg_clog files


Missing pg_clog files

From: Carol Walter
Date:
I'm missing about 30 pg_clog files.  How do I recover from this?

Carol

Re: Missing pg_clog files

From: "Tena Sakai"
Date:

Hi Carol,

I don't have an answer...

I have had a similar problem and whatever I
did didn't work.  As I recall, the files in
pg_clog directory started from 0000 and the
number got bigger by 1 (hex) and there was
a gap between the most recent one and the
second recent one.  (Is that how you know
that you are missing about 30 files?)  I
suspect the reason it didn't work for me
was that the last file was incomplete.  I
wound up upgrading to the latest postgres and
restoring from a backup file I had made with pg_dumpall.
My platform is linux (do I recall yours is solaris?)
and the postgres version I was running was 8.3.0.

The lesson I learned was to keep an eye on the
pg_clog directory.  Since the installation of
8.3.3 there has been only one file (0000), though
the size keeps growing (I think by 8k bytes).

My two bits.

Tena Sakai
tsakai@gallo.ucsf.edu


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: Missing pg_clog files

From: Carol Walter
Date:
Hi, Tena,

Yes, you do recall correctly.  It is Solaris 10 and Postgres 8.2.3.

I'm going to run a pg_dumpall and keep my fingers crossed.  I'm getting similar errors when I try to do other things like vacuumdb, so maybe pg_dumpall won't run either.

Carol


Re: Missing pg_clog files

From: "Scott Marlowe"
Date:
On Tue, Sep 23, 2008 at 4:35 PM, Carol Walter <walterc@indiana.edu> wrote:
> Hi, Tena,
> Yes, you do recall correctly.  It is Solaris 10 and Postgres 8.2.3.
> I'm going to run a pg_dumpall and keep my fingers crossed.  I'm getting
> similar errors when I try to do other things like vacuumdb, so maybe
> pg_dumpall won't run either.

You do know that version has known data-eating bugs that were fixed
in later releases, right?

Re: Missing pg_clog files

From: "Tena Sakai"
Date:

Hi Carol,

Unless it is impossibly difficult, I would try
to move to the latest (8.3.4) postgres before
the situation worsens.  I have followed Scott's
advice in the past and never regretted it.

My two bits, again.

Tena Sakai
tsakai@gallo.ucsf.edu



Re: Missing pg_clog files

From: "Walter, Carol Williams"
Date:
Hi, Tena,

I actually downloaded 8.3.3 last week.  The upgrade has been in the
plan.  I didn't realize the latest was 8.3.4.  I am concerned, though;
we have so many dependencies that upgrades never go smoothly.

Carol

Quoting Tena Sakai <tsakai@gallo.ucsf.edu>:

> Hi Carol,
>
> Unless it is impossibly difficult, I would try
> to move onto the latest (8.3.4) postgres before
> the situation worsens.  I had followed Scott's
> advice in the past and never regretted.
>
> My two bits, again.
>
> Tena Sakai
> tsakai@gallo.ucsf.edu
>

Re: Missing pg_clog files

From: "Scott Marlowe"
Date:
On Tue, Sep 23, 2008 at 7:59 PM, Walter,  Carol Williams
<walterc@indiana.edu> wrote:
> Hi, Tena,
>
> I actually downloaded 8.3.3 last week.  The upgrade has been in the plan.  I
> didn't realize the latest was 8.3.4. I am concerned though. We have so many
> dependencies that they never go smoothly.

The real priority is getting the latest bug-fix release of 8.2
installed.  That only takes a few minutes and is quite easy; just
update the pgsql package.

Going to 8.3 requires a dump and restore plus testing your app for
compatibility, something you don't have to do going to 8.2.10 or
whatever the latest 8.2 release is.

Re: Missing pg_clog files

From: Carol Walter
Date:
Hello,

Here are the pg_clog files I still have.

-rw-------   1 postgres postgres  262144 Jun  4 18:43 0030
-rw-------   1 postgres postgres  262144 Jun  6 02:06 0031
-rw-------   1 postgres postgres  262144 Jun  7 01:26 0032
-rw-------   1 postgres postgres  262144 Jun  7 21:26 0033
-rw-------   1 postgres postgres  262144 Jun  8 23:16 0034
-rw-------   1 postgres postgres  262144 Jun  9 21:46 0035
-rw-------   1 postgres postgres  262144 Jun 11 06:33 0036
-rw-------   1 postgres postgres  262144 Jun 13 08:33 0037
-rw-------   1 postgres postgres  262144 Jun 16 04:20 0038
-rw-------   1 postgres postgres  262144 Jun 18 11:36 0039
-rw-------   1 postgres postgres  262144 Jun 21 17:06 003A
-rw-------   1 postgres postgres  262144 Jun 24 18:43 003B
-rw-------   1 postgres postgres  262144 Jun 26 05:26 003C
-rw-------   1 postgres postgres  262144 Jun 27 18:40 003D
-rw-------   1 postgres postgres  262144 Jun 28 19:06 003E
-rw-------   1 postgres postgres  262144 Jun 30 09:10 003F
-rw-------   1 postgres postgres  262144 Jul  2 23:40 0040
-rw-------   1 postgres postgres  262144 Jul  4 17:03 0041
-rw-------   1 postgres postgres  262144 Jul  6 03:30 0042
-rw-------   1 postgres postgres  262144 Jul  7 23:06 0043
-rw-------   1 postgres postgres  262144 Jul  9 12:53 0044
-rw-------   1 postgres postgres  262144 Jul 12 19:13 0045
-rw-------   1 postgres postgres  262144 Jul 17 08:06 0046
-rw-------   1 postgres postgres  262144 Jul 21 07:56 0047
-rw-------   1 postgres postgres  262144 Jul 26 22:06 0048
-rw-------   1 postgres postgres  262144 Aug  2 02:10 0049
-rw-------   1 postgres postgres  262144 Aug  5 21:50 004A
-rw-------   1 postgres postgres  262144 Aug  7 11:20 004B
-rw-------   1 postgres postgres  262144 Aug 10 04:58 004C
-rw-------   1 postgres postgres  262144 Aug 11 15:41 004D
-rw-------   1 postgres postgres  262144 Aug 12 22:31 004E
-rw-------   1 postgres postgres  262144 Aug 14 19:31 004F
-rw-------   1 postgres postgres  262144 Aug 23 15:11 0050
-rw-------   1 postgres postgres  262144 Sep  7 16:01 0051
-rw-------   1 postgres postgres  262144 Sep 18 03:41 0052
-rw-------   1 postgres postgres  262144 Sep 23 01:27 0053
-rw-------   1 postgres postgres   40960 Sep 23 23:22 0054

Carol

Re: Missing pg_clog files

From: "Tena Sakai"
Date:

Hi Carol,

I detect in you some apprehension that pg_dumpall
won't run or complete.  Why is that?  Have you already
run it and it didn't work?  If that's not the case,
why not run pg_dumpall at a quiet hour and see?

I think Scott is right about installing the latest
8.2 on top.  It won't be a time-consuming task.
Why not give it a whirl?  It would be good to
find out one way or the other.


Scott: Are files 0000 through 002F (which are
not there) absolutely necessary for recovering data?

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu




Re: Missing pg_clog files

From: "Scott Marlowe"
Date:
On Tue, Sep 23, 2008 at 10:16 PM, Tena Sakai <tsakai@gallo.ucsf.edu> wrote:
> Hi Carol,
>
> I detect in you some apprehension as to pg_dumpall
> won't run or complete.  Why is that?  Have you already
> done it and it didn't work?  If that's not the case,
> why not run pg_dumpall at a quiet hour and see?
>
> I think Scott is right as to install the latest
> 8.2 on top.  It won't be time consuming task.
> Why not give it a wheel?  It would be good to
> find out one way or the other.
>
>
> Scott: Are files 0000 through 002F (which are
> not there) absolutely necessary for recovering data?

Most likely not.  If the db won't start up without them, it might be
possible to create new clog files that are nothing but zeroes.  Never
been in this position though...

Re: Missing pg_clog files

From: "Tena Sakai"
Date:

Hi,

As a clueless, optimistic traveller, I'd interpret
what Scott's saying as, "maybe slightly less to
worry about."

Regards,

Tena Sakai
tsakai@gallo.ucsf.edu


Re: Missing pg_clog files

From: "Walter, Carol Williams"
Date:
Hello,

I will try this.  Is there a particular size these files need to be?
Then I can try the pg_dumpall.  That's, of course, my main concern.  I
need to get the data out.

By the way, before I went into panic mode, I tried pg_dumpall and
reindexing the database.  Everything gets the same error, perhaps with
a different file name.

Carol


Re: Missing pg_clog files

From: Carol Walter
Date:
Hello,

I tried creating the files 0000 through 002F.  Pg_dump still will not
run.  The error was as follows:

-bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  could not access status
of transaction 20080015
DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
Error 0.
pg_dump: The command was: COPY ebizd.products_categories
(category_id, product_id) TO stdout;

Carol


Re: Missing pg_clog files

From: Tom Lane
Date:
Carol Walter <walterc@indiana.edu> writes:
> I tried creating the files 0000 through 002F.  Pg_dump still will not
> run.  The error was as follows:

> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  could not access status
> of transaction 20080015
> DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
> Error 0.
> pg_dump: The command was: COPY ebizd.products_categories
> (category_id, product_id) TO stdout;

You need to make the files the right size (256K of zeroes).
A suitable "dd" from /dev/zero will accomplish this on modern
Unixen (ie, anything that has /dev/zero).
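For concreteness, Tom's dd recipe might be sketched as below. The segment names 0000 through 002F are the ones missing in Carol's case; the scratch directory is my own choice, and the files should only be copied into pg_clog with the server stopped and the data directory backed up first:

```shell
# Sketch: build 256K zero-filled stand-ins for the missing pg_clog
# segments 0000 .. 002F (hex names).  Work in a scratch directory;
# never write into a live $PGDATA without a backup.
mkdir -p /tmp/clog_standins
cd /tmp/clog_standins
for n in $(seq 0 47); do                 # 47 decimal = 0x2F
    seg=$(printf '%04X' "$n")            # 0000, 0001, ... 002F
    dd if=/dev/zero of="$seg" bs=8192 count=32 2>/dev/null  # 32 x 8K pages = 262144 bytes
done
ls -l 0000 002F
```

A zero-filled segment is exactly the "assume all these old transactions aborted" state Tom warns about, so this unblocks pg_dump rather than recovering anything.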

Note that this is by no means a fix, it simply allows pg_dump to
complete.  What you are really doing by filling those files with
zeroes is saying "assume all these old transactions aborted".
You *will* have data loss.  It will only affect rows that haven't
been accessed in a very long time (since at least June, looks like)
but gone is gone.

Another possibility that might be better is to fill the files with
0x55, though this is harder since /dev/zero won't help.  That would
force all the old transactions to be considered committed rather than
aborted.  This isn't really better from a consistency standpoint, but
if you feel that most of your data-altering commands succeed then
this might give you a closer approximation to the state you want.
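Since /dev/zero cannot emit 0x55 directly, one workaround (my sketch, not something prescribed in the thread) is to pipe zeroes through tr; 0x55 is octal \125:

```shell
# Sketch: a 256K segment filled with 0x55 bytes, per Tom's alternative.
# tr rewrites each zero byte; \125 is the octal escape for 0x55.
dd if=/dev/zero bs=8192 count=32 2>/dev/null | tr '\000' '\125' > /tmp/clog_55_example
wc -c /tmp/clog_55_example             # 8192 * 32 = 262144 bytes
od -An -tx1 -N4 /tmp/clog_55_example   # inspect the first few bytes
```

The resulting file would then be copied over the fabricated segment (e.g. pg_clog/0013) in place of the zero-filled version.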

The whole thing is pretty troubling because 8.2.x is supposed to
contain defenses against this type of problem.  Could we see
the contents of "select datname, datfrozenxid from pg_database"?
Also, have you tried dumping individual databases instead of
pg_dumpall?  (It would be good to experiment with that before
you start making bogus pg_clog files; once you do that there's
no going back in terms of recovering the true state of your data.)

            regards, tom lane

PS: Since you've evidently got a lot of rows that haven't been
accessed in months, I conclude that you have not been running
routine backups.  Tut tut.  I trust you'll remedy that oversight
as soon as you get out of the immediate problem.

Re: Missing pg_clog files

From: Carol Walter
Date:
Yes, as an experienced DBA, there is no excuse for not having the
routine backups.  I do it for the other instances but not this one.
It's a student instance; it contains student databases.  The students
are required to do their own backups.  The problem is, the database
got used for some non-student data.  I should have anticipated that
since the resource is there, someone would use it.  =)   There are
three small databases that I need.

I'm going to create the files again, filled with 0x55.  I need 256K
of 0x55 bytes?

I did try to dump individual databases.  Since I need these three
that was the first thing I did.

If I upgrade to the highest version of 8.2, do I still have to do the
dump and restore?  That may be a dumb question, but I've seen some
emails that seem to say that I don't.

Here are the results of the query you asked for.

      datname      | datfrozenxid
------------------+--------------
  postgres         |          524
  aapike           |          524
  acmweb           |          524
  aeprice          |          524
  sgadkari         |          524
  template0        |          524
  ahanders         |          524
  ajkurtz          |          524
  akeebaug         |          524
  aloehrle         |          524
  amfalcon         |          524
  amihnen          |          524
  amlmonro         |          524
  andmcilw         |          524
  arhoda           |          524
  arsteven         |          524
  asist            |          524
  askschoo         |          524
  atawfik          |          524
  awead            |          524
  barb             |          524
  benpeck          |          524
  bepnelso         |          524
  berry3           |          524
  bflesher         |          524
  biblio           |          524
  bjc2             |          524
  blbeals          |          524
  blfay            |          524
  brichwin         |          524
  brog             |          524
  burtont          |          524
  cartermt         |          524
  cdwhitlo         |          524
  cgoodbee         |          524
  chbishop         |          524
  clschwie         |          524
  cmdablog         |          524
  cmfriend         |          524
  cwestbro         |          524
  daltenho         |          524
  datnguye         |          524
  davisjs          |          524
  dlafemin         |          524
  dlgriggs         |          524
  dotsonm          |          524
  dpierz           |          524
  dsa              |          524
  dtdo             |          524
  wke              |          524
  l548s07c         |          524
  jm               |          524
  dbicknel         |          524
  dwray            |          524
  eaodonne         |          524
  jeejacks         |          524
  edewert          |          524
  eeich            |          524
  efhardy          |          524
  ellwrigh         |          524
  emerya           |          524
  emlcoope         |          524
  emudave          |          524
  eschramm         |          524
  bkjacob          |          524
  jkulneva         |          524
  kuepeter         |          524
  ys3              |          524
  cepynes          |          524
  flashb           |          524
  fullera          |          524
  gabwong          |          524
  hbusch           |          524
  hcapocci         |          524
  hiteaw           |          524
  hjtolber         |          524
  ingschne         |          524
  iplanton         |          524
  jajcdb           |          524
  jfieber          |          524
  jiwan            |          524
  jku              |          524
  josreyes         |          524
  jowarren         |          524
  jplong           |          524
  jschuenz         |          524
  jtweedy          |          524
  kacates          |          524
  karpaden         |          524
  kbivcsi          |          524
  kcentann         |          524
  kcfreder         |          524
  kcostin          |          524
  hrosenba         |          524
  stjmarsh         |          524
  rvarick          |          524
  prasadm          |          524
  kdlib            |          524
  khenrich         |          524
  kiyang           |          524
  kmane            |          524
  kmauer           |          524
  knbayles         |          524
  knoubani         |          524
  kseki            |          524
  l546f06a         |          524
  l548s06a         |          524
  lair_medinfer    |          524
  lbikoff          |          524
  lee55            |          524
  leemchri         |          524
  jacksonj         |          524
  ageorges         |          524
  austroud         |          524
  bmoriari         |          524
  broos            |          524
  ceich            |          524
  edawidow         |          524
  ljlangnet        |          524
  ljohnsto         |          524
  lkaiser2         |          524
  lkhooper         |          524
  lmolefi          |          524
  ltian            |          524
  lucas_dictionary |          524
  lucas_genedb     |          524
  lucas_proteindb  |          524
  macci            |          524
  magpeter         |          524
  epoirier         |          524
  hnethert         |          524
  jgaley           |          524
  jtwelty          |          524
  jwalrath         |          524
  mamablogs        |          524
  mapfinder        |          524
  markane          |          524
  mcglass          |          524
  meho             |          524
  mfr              |          524
  mmsommer         |          524
  mnapier          |          524
  moore35          |          524
  morrisjm         |          524
  mosse            |          524
  msohl            |          524
  mtl554           |          524
  nachase          |          524
  ngarrett         |          524
  nirobins         |          524
  nlgeorge         |          524
  nsfitwf          |          524
  jwoomer          |          524
  kekbia           |          524
  koulikom         |          524
  ksd              |          524
  lsisler          |          524
  mwourms          |          524
  nucleus          |          524
  omthomas         |          524
  naalsham         |          524
  nansuwan         |          524
  nfcapps          |          524
  nwahrman         |          524
  oescue           |          524
  plpierso         |          524
  ppatil           |          524
  psbright         |          524
  oncosifter       |          524
  otdelong         |          524
  paolillo         |          524
  penwang          |          524
  perezh           |          524
  phppgadmin       |          524
  places           |          524
  pldillon         |          524
  prodes           |          524
  pwelsch          |          524
  qadrupal         |          524
  rduhon           |          524
  rdwillis         |          524
  repotter         |          524
  rgao             |          524
  rkcsi            |          524
  rklusman         |          524
  rmukkama         |          524
  rosea            |          524
  rosenbsj         |          524
  rpherwan         |          524
  rtolnay          |          524
  sagoodwi         |          524
  sakram           |          524
  sambre           |          524
  scott6           |          524
  sestumpf         |          524
  sghurd           |          524
  shawd            |          524
  sjt              |          524
  sjunk            |          524
  skashwan         |          524
  skonkiel         |          524
  slisprot         |          524
  slsingle         |          524
  slspangl         |          524
  smercure         |          524
  sp23             |          524
  spencers         |          524
  sprao            |          524
  spraocal         |          524
  spraoit          |          524
  stritt           |          524
  switzers         |          524
  tbjacobs         |          524
  rbrubach         |          524
  saaalshe         |          524
  template1        |          524
  tigan            |          524
  tlcamero         |          524
  tlennis          |          524
  tlmiles          |          524
  tneirync         |          524
  trec             |          524
  tvdwyer          |          524
  upriss           |          524
  l548s07b         |          524
  videob           |          524
  vkluehrs         |          524
  wemigh           |          524
  wsams            |          524
  xyao             |          524
  yasun            |          524
  yufu             |          524
  yuwang2          |          524
  yz12             |          524
  rdurrer          |          524
  rbain            |          524
  jgottwig         |          524
  gallantm         |          524
  ajwei            |          524
  rpvander         |          524
  l548s07a         |          524
  sbluemle         |          524
  sstrahl          |          524
  stevecox         |          524
  vcsingh          |          524
  huangb           |          524
  mpraskav         |          524
  lvanleer         |          524
  mmillard         |          524
  linshedd         |          524
  mgunkel          |          524
  aeathava         |          524
  rbiars           |          524
  krblackw         |          524
  boltonb          |          524
  jcornn           |          524
  cdethlof         |          524
  reells           |          524
  lorhardi         |          524
  thommey          |          524
  ckhull           |          524
  bjules           |          524
  lklake           |          524
  rootk            |          524
  whmcmill         |          524
  eoverhau         |          524
  mrome            |          524
  as37             |          524
  krlthoma         |          524
  jltyner          |          524
  mavest           |          524
  lcwelhan         |          524
  awismer          |          524
  confluence       |          524
  jawalsh          |          524
  hshewale         |          524
  polavara         |          524
  s517f07a         |          524
  ebiz             |          524
  lalfi            |          524
  vcob             |          524
  s602s07f         |          524
  yangfund         |          524
  tdbowman         |          524
  ofabilol         |          524
  s517s08a         |          524
  slis_assets      |          524
  clhoneyc         |          524
  bzflag           |          524
  caroltest        |          524
  citesrch         |          524
  vgangal          |          524
  skhowaji         |          524
  ofeda            |          524
  jatterbu         |          524
  s517s08b         |          524
  emakki           |          524
  test             |          524
  dingying         |          524
  walterc          |          524
  msinghi          |          524
(301 rows)

Thank you for all your help.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc@indiana.edu> writes:
>> I tried creating the files 0000 through 002F.  Pg_dump still will not
>> run.  The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR:  could not access status
>> of transaction 20080015
>> DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete.  What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss.  It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help.  That would
> force all the old transactions to be considered committed rather than
> aborted.  This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem.  Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall?  (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
>
>             regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups.  Tut tut.  I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.
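[Editor's note: a minimal sketch of the dd invocation Tom Lane describes above. The scratch path /tmp/clog_demo is hypothetical; on the actual server the target would be the missing segment itself, e.g. pg_clog/0013 from the error message.]

```shell
# A clog segment is 256K: 32 pages of 8 kB each, here all zero bytes.
# Zero bytes mean "assume these old transactions aborted" -- this lets
# pg_dump complete, but it is not a fix and will lose data.
mkdir -p /tmp/clog_demo
dd if=/dev/zero of=/tmp/clog_demo/0013 bs=8192 count=32
```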


Re: Missing pg_clog files

From
Carol Walter
Date:
Hi, Tom,

Are the files that contain the hex characters supposed to contain a
single string and no control characters?  I used Excel to create the
files, but I'll have to edit them when I get them to the Solaris box
to take out any bad characters.

I'm also wondering if, after I create the dummy files, and pg_dump
works, I could restore an old pg_dumpall file and then insert any
data that aren't there from the pg_dumps.

Carol

On Sep 24, 2008, at 9:10 AM, Tom Lane wrote:

> Carol Walter <walterc@indiana.edu> writes:
>> I tried creating the files 0000 through 002F.  Pg_dump still will not
>> run.  The error was as follows:
>
>> -bash-3.00$ pg_dump -U postgres ebiz > ebiz_bk.sql
>> pg_dump: SQL command failed
>> pg_dump: Error message from server: ERROR:  could not access status
>> of transaction 20080015
>> DETAIL:  Could not read from file "pg_clog/0013" at offset 32768:
>> Error 0.
>> pg_dump: The command was: COPY ebizd.products_categories
>> (category_id, product_id) TO stdout;
>
> You need to make the files the right size (256K of zeroes).
> A suitable "dd" from /dev/zero will accomplish this on modern
> Unixen (ie, anything that has /dev/zero).
>
> Note that this is by no means a fix, it simply allows pg_dump to
> complete.  What you are really doing by filling those files with
> zeroes is saying "assume all these old transactions aborted".
> You *will* have data loss.  It will only affect rows that haven't
> been accessed in a very long time (since at least June, looks like)
> but gone is gone.
>
> Another possibility that might be better is to fill the files with
> 0x55, though this is harder since /dev/zero won't help.  That would
> force all the old transactions to be considered committed rather than
> aborted.  This isn't really better from a consistency standpoint, but
> if you feel that most of your data-altering commands succeed then
> this might give you a closer approximation to the state you want.
>
> The whole thing is pretty troubling because 8.2.x is supposed to
> contain defenses against this type of problem.  Could we see
> the contents of "select datname, datfrozenxid from pg_database"?
> Also, have you tried dumping individual databases instead of
> pg_dumpall?  (It would be good to experiment with that before
> you start making bogus pg_clog files; once you do that there's
> no going back in terms of recovering the true state of your data.)
>
>             regards, tom lane
>
> PS: Since you've evidently got a lot of rows that haven't been
> accessed in months, I conclude that you have not been running
> routine backups.  Tut tut.  I trust you'll remedy that oversight
> as soon as you get out of the immediate problem.
>


Re: Missing pg_clog files

From
Tom Lane
Date:
Carol Walter <walterc@indiana.edu> writes:
> Are the files that contain the hex characters supposed to contain a
> single string and no control characters?

Yes, you want 256K occurrences of the byte value 0x55 and nothing else.

> I'm also wondering if, after I create the dummy files, and pg_dump
> works, I could restore an old pg_dumpall file and then insert any
> data that aren't there from the pg_dumps.

It'd be a good idea to do as much cross-checking as you can, since
it's highly probable that the dumped data will be at least partly wrong.

No, I'm afraid updating to 8.2.latest won't get you out of this.
It might possibly prevent a recurrence.

            regards, tom lane

Re: Missing pg_clog files

From
Tom Lane
Date:
Carol Walter <walterc@indiana.edu> writes:
> Here is the results of the query you asked for.

>       datname      | datfrozenxid
> ------------------+--------------
>   postgres         |          524
>   ...

Huh.  Postgres should never have removed any of those files with
datfrozenxid this low.  So it seems we have two possible explanations:
* there's something subtly wrong with the TruncateCLOG logic
* the files were removed by something else, ie, human error or
  operating-system failure

            regards, tom lane

Re: Missing pg_clog files

From
Carol Walter
Date:
To use the hex value 0x55, do I need to enclose it in single quotes?

Carol

On Sep 24, 2008, at 11:44 AM, Tom Lane wrote:

> Carol Walter <walterc@indiana.edu> writes:
>> Are the files that contain the hex characters supposed to contain a
>> single string and no control characters?
>
> Yes, you want 256K occurrences of the byte value 0x55 and nothing
> else.
>
>> I'm also wondering if, after I create the dummy files, and pg_dump
>> works, I could restore an old pg_dumpall file and then insert any
>> data that aren't there from the pg_dumps.
>
> It'd be a good idea to do as much cross-checking as you can, since
> it's highly probable that the dumped data will be at least partly
> wrong.
>
> No, I'm afraid updating to 8.2.latest won't get you out of this.
> It might possibly prevent a recurrence.
>
>             regards, tom lane
>


Re: Missing pg_clog files

From
"Scott Marlowe"
Date:
No, the file needs to be nothing but hex value 55 throughout.  The
attachment I sent earlier is just that:

smarlowe@sawyer:~$ hexdump 55
0000000 5555 5555 5555 5555 5555 5555 5555 5555
*
0040000

i.e. all 55s, all the time: not the ASCII digits 5 and 5, but the hex
value.  The actual file is just the ASCII letter U over and over:

smarlowe@sawyer:~$ head -c 100 55
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU
UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU

and so on.

On Wed, Sep 24, 2008 at 12:09 PM, Carol Walter <walterc@indiana.edu> wrote:
> To use the hex value 0x55, do I need to enclose it in single quotes?
>
> Carol
>
> On Sep 24, 2008, at 11:44 AM, Tom Lane wrote:
>
>> Carol Walter <walterc@indiana.edu> writes:
>>>
>>> Are the files that contain the hex characters supposed to contain a
>>> single string and no control characters?
>>
>> Yes, you want 256K occurrences of the byte value 0x55 and nothing else.
>>
>>> I'm also wondering if, after I create the dummy files, and pg_dump
>>> works, I could restore an old pg_dumpall file and then insert any
>>> data that aren't there from the pg_dumps.
>>
>> It'd be a good idea to do as much cross-checking as you can, since
>> it's highly probable that the dumped data will be at least partly wrong.
>>
>> No, I'm afraid updating to 8.2.latest won't get you out of this.
>> It might possibly prevent a recurrence.
>>
>>                        regards, tom lane
>>
>
>
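[Editor's note: a sketch of one way to build such an all-0x55 file without a hex editor, assuming a standard Unix userland with dd and tr. 0x55 is octal 125, the ASCII letter U; the output path /tmp/clog_55_demo is hypothetical, standing in for the real pg_clog segment.]

```shell
# Stream 256K of zero bytes and translate every byte to 0x55
# (octal 125, the ASCII letter 'U'), marking all transactions
# in the segment as committed rather than aborted.
dd if=/dev/zero bs=8192 count=32 | tr '\000' '\125' > /tmp/clog_55_demo
```

`head -c 16 /tmp/clog_55_demo` should then print a run of U characters, matching the hexdump output shown above.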