Discussion: preventing transaction wraparound


preventing transaction wraparound

From:
Mike Broers
Date:
Lately I have been paranoid about the possibility of transaction wraparound failure due to a potential orphaned TOAST table.  I have yet to prove that I have such an object in my database, but I am running Postgres 8.3 with autovacuum enabled and am doing nightly manual vacuums as well, and I cannot explain the results of this query.  Any assistance is greatly appreciated.
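
For what it's worth, this is the catalog query I've been sketching to look for such an orphan; the NOT EXISTS condition reflects my assumption that every legitimate TOAST table is referenced by some relation's reltoastrelid:

  -- sketch: TOAST tables in pg_toast that no ordinary relation points
  -- to via reltoastrelid (my assumption of what "orphaned" would mean)
  SELECT t.oid, t.relname
  FROM pg_class t
  JOIN pg_namespace n ON n.oid = t.relnamespace
  WHERE n.nspname = 'pg_toast'
    AND t.relkind = 't'
    AND NOT EXISTS
        (SELECT 1 FROM pg_class c WHERE c.reltoastrelid = t.oid);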

Yesterday I ran:

production=# select datname, age(datfrozenxid) from pg_database;
  datname   |    age    
------------+-----------
 template1  | 100260769
 template0  |  35997820
 postgres   | 100319291
 stage      | 100263734
 production | 100319291

and today after the nightly vacuum ran I got this:

production=# select datname, age(datfrozenxid) from pg_database;
  datname   |    age    
------------+-----------
 template1  | 100677381
 template0  |  37594611
 postgres   | 100738854
 stage      | 100680248
 production | 100738770

Am I just counting down to 2,000,000,000 and the Postgres apocalypse? Is there a way for me to determine what the actual transaction threshold is going to be? I've read the PostgreSQL docs and Greg Smith's section in his High Performance book, and I have to admit I am having difficulty understanding why this number is not retreating after a manual database vacuum.
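
For what it's worth, here is the back-of-the-envelope arithmetic I've been attempting (the 2^31 ceiling is my reading of the docs, not something I've confirmed):

  -- sketch: rough distance from each database's datfrozenxid to the
  -- ~2^31 wraparound horizon (the constant is my assumption)
  SELECT datname,
         age(datfrozenxid) AS xid_age,
         2147483648 - age(datfrozenxid) AS xids_remaining
  FROM pg_database;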

thanks,
Mike


Re: preventing transaction wraparound

From:
Chris Browne
Date:
mbroers@gmail.com (Mike Broers) writes:
> Lately I have been paranoid about the possibility of transaction
> wraparound failure due to a potential orphaned TOAST table.  I have
> yet to prove that I have such an object in my database, but I am
> running Postgres 8.3 with autovacuum enabled and am doing nightly
> manual vacuums as well, and I cannot explain the results of this
> query.  Any assistance is greatly appreciated.
>
>
> Yesterday I ran:
>
> production=# select datname, age(datfrozenxid) from pg_database;
>   datname   |    age    
> ------------+-----------
>  template1  | 100260769
>  template0  |  35997820
>  postgres   | 100319291
>  stage      | 100263734
>  production | 100319291
>
> and today after the nightly vacuum ran I got this:
>
> production=# select datname, age(datfrozenxid) from pg_database;
>   datname   |    age    
> ------------+-----------
>  template1  | 100677381
>  template0  |  37594611
>  postgres   | 100738854
>  stage      | 100680248
>  production | 100738770
>
> Am I just counting down to 2,000,000,000 and the Postgres apocalypse?
> Is there a way for me to determine what the actual transaction
> threshold is going to be? I've read the PostgreSQL docs and Greg
> Smith's section in his High Performance book, and I have to admit I
> am having difficulty understanding why this number is not retreating
> after a manual database vacuum.

Nothing is likely to be problematic here.

Tuples only get frozen once they're Rather Old.  By default, a vacuum
isn't forced to scan the whole table (which is what lets datfrozenxid
advance) until the table's age reaches 150 million.  See the value of
the GUC vacuum_freeze_table_age.

   http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE

I'd expect to see the age increase towards 150M before anything more
happens.
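
If you want to sanity-check what your build actually uses, pg_settings
will show whichever freeze-related knobs your version exposes:

  -- list every freeze-related setting and its current value
  SELECT name, setting
  FROM pg_settings
  WHERE name LIKE '%freeze%';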

I suggest you poke into this at a bit more detailed level, and peek at
the states of the tables in one of those databases via:

  SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r'
  ORDER BY 2;

This will tell you which tables have what going on with their freezing.
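
And since your original worry was a TOAST table specifically: TOAST
relations carry their own relfrozenxid (relkind 't' in pg_class), so a
variant like this surfaces them too:

  -- oldest relations first, TOAST tables included
  SELECT relname, relkind, age(relfrozenxid)
  FROM pg_class
  WHERE relkind IN ('r', 't')
  ORDER BY 3 DESC
  LIMIT 10;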

You could explicitly run VACUUM FREEZE against one or another of the
databases, which would cause all the affected tables' data to get
frozen, and if you did that against all the tables in (say) the
"postgres" database, you might anticipate seeing the age fall to near 0.
--
let name="cbbrowne" and tld="gmail.com" in String.concat "@" [name;tld];;
http://linuxfinances.info/info/
"You can measure a programmer's perspective by noting his attitude on
the continuing vitality of FORTRAN." -- Alan J. Perlis