Discussion: transaction id wraparound problem
Hi
I get error messages on the console that say:
WARNING: database "xxx" must be vacuumed within 10094646 transactions
I shut down, restart pg, and issue a vacuumdb -f <database>
I still get messages saying the database must be vacuumed, and the transaction count keeps decreasing. Is vacuumdb -f not working for some reason?
On Tue, Sep 05, 2006 at 10:45:40PM -0700, Sriram Dandapani wrote: > WARNING: database "xxx" must be vacuumed within 10094646 transactions > > > > I shutdown, restart pg and issue a vacuumdb -f <database> -f does _not_ mean "vacuum all databases". It means "do VACUUM FULL". These aren't the same thing. My bet is what you need is vacuumdb -a You've probably missed a database you don't know is there. A -- Andrew Sullivan | ajs@crankycanuck.ca Everything that happens in the world happens at some place. --Jane Jacobs
"Sriram Dandapani" <sdandapani@counterpane.com> writes: > I get error messages on the console that says > WARNING: database "xxx" must be vacuumed within 10094646 transactions > I shutdown, restart pg and issue a vacuumdb -f <database> The shutdown/restart was a waste of typing, and -f doesn't really help here either. > I still get messages saying database must be vacuumed and the > transaction count keeps decreasing. I'll bet a nickel you did not run the vacuum as superuser. It has to be done by a superuser to have permission to vacuum every table in the database (including system catalogs). regards, tom lane
Do you mean that I log in as, say, root and issue a vacuumdb (or do I log in as a postgres user with special privileges)? -----Original Message----- From: Tom Lane [mailto:tgl@sss.pgh.pa.us] Sent: Wednesday, September 06, 2006 6:41 AM To: Sriram Dandapani Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem "Sriram Dandapani" <sdandapani@counterpane.com> writes: > I get error messages on the console that says > WARNING: database "xxx" must be vacuumed within 10094646 transactions > I shutdown, restart pg and issue a vacuumdb -f <database> The shutdown/restart was a waste of typing, and -f doesn't really help here either. > I still get messages saying database must be vacuumed and the > transaction count keeps decreasing. I'll bet a nickel you did not run the vacuum as superuser. It has to be done by a superuser to have permission to vacuum every table in the database (including system catalogs). regards, tom lane
Is there a quick way (other than vacuum full) to re-init the transaction ids? (I can afford some downtime.) -----Original Message----- From: Tom Lane [mailto:tgl@sss.pgh.pa.us] Sent: Wednesday, September 06, 2006 6:41 AM To: Sriram Dandapani Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem "Sriram Dandapani" <sdandapani@counterpane.com> writes: > I get error messages on the console that says > WARNING: database "xxx" must be vacuumed within 10094646 transactions > I shutdown, restart pg and issue a vacuumdb -f <database> The shutdown/restart was a waste of typing, and -f doesn't really help here either. > I still get messages saying database must be vacuumed and the > transaction count keeps decreasing. I'll bet a nickel you did not run the vacuum as superuser. It has to be done by a superuser to have permission to vacuum every table in the database (including system catalogs). regards, tom lane
A PostgreSQL superuser, so yes user postgres will work just fine. Sriram Dandapani wrote: > Do you mean that I login as say root and issue a vacuumdb (or do I login > as a postgres user with special privileges) > > -----Original Message----- > From: Tom Lane [mailto:tgl@sss.pgh.pa.us] > Sent: Wednesday, September 06, 2006 6:41 AM > To: Sriram Dandapani > Cc: pgsql-admin@postgresql.org > Subject: Re: [ADMIN] transactoin id wraparound problem > > "Sriram Dandapani" <sdandapani@counterpane.com> writes: >> I get error messages on the console that says >> WARNING: database "xxx" must be vacuumed within 10094646 transactions > >> I shutdown, restart pg and issue a vacuumdb -f <database> > > The shutdown/restart was a waste of typing, and -f doesn't really help > here either. > >> I still get messages saying database must be vacuumed and the >> transaction count keeps decreasing. > > I'll bet a nickel you did not run the vacuum as superuser. It has to be > done by a superuser to have permission to vacuum every table in the > database (including system catalogs). > > regards, tom lane
I created a superuser using the createuser command and issued vacuumdb -f -U superuser <database-name>. I still keep getting a decreasing transaction count warning. Am I doing something wrong here? (The database is about 120G, and while I do expect vacuum full to take time, I expect the warning to show an increasing count.) Regards Sriram -----Original Message----- From: Tom Lane [mailto:tgl@sss.pgh.pa.us] Sent: Wednesday, September 06, 2006 6:41 AM To: Sriram Dandapani Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem "Sriram Dandapani" <sdandapani@counterpane.com> writes: > I get error messages on the console that says > WARNING: database "xxx" must be vacuumed within 10094646 transactions > I shutdown, restart pg and issue a vacuumdb -f <database> The shutdown/restart was a waste of typing, and -f doesn't really help here either. > I still get messages saying database must be vacuumed and the > transaction count keeps decreasing. I'll bet a nickel you did not run the vacuum as superuser. It has to be done by a superuser to have permission to vacuum every table in the database (including system catalogs). regards, tom lane
On Wed, Sep 06, 2006 at 06:53:28AM -0700, Sriram Dandapani wrote: > Do you mean that I login as say root and issue a vacuumdb (or do I login > as a postgres user with special privileges) Probably you want vacuumdb -U postgres -a The -a tells it to do all databases, and the -U postgres tells it to connect as the postgresql superuser. If your superuser is something other than postgres (sometimes it is -- pgsql for instance -- but rarely), you should use that username instead. This is the _database_ user name, not the UNIX username (which is what root would be). You'll need the (postgresql) password for the account in question to do this. A -- Andrew Sullivan | ajs@crankycanuck.ca In the future this spectacle of the middle classes shocking the avant- garde will probably become the textbook definition of Postmodernism. --Brad Holland
On Wed, Sep 06, 2006 at 09:48:45AM -0700, Sriram Dandapani wrote: > Is there a quick way(other than vacuum full) to re-init the transaction > ids. (I can afford some downtime) You don't need a vacuum full. You just need a bog-standard vacuum, but you need it _on every database_. A -- Andrew Sullivan | ajs@crankycanuck.ca The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun
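[Editor's note: Andrew's distinction can be spelled out as a minimal SQL sketch, assuming a superuser connection; the comments are interpretation, not part of the thread.]

```sql
-- Run in EACH database in the cluster (equivalent to "vacuumdb -a" from the shell):
VACUUM;          -- plain, database-wide vacuum: freezes old rows so that
                 -- pg_database.datfrozenxid can advance, which is all wraparound needs
-- VACUUM FULL;  -- also rewrites tables to reclaim disk space; much slower,
                 -- and not required to push back the wraparound point
```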
Change the -f to -a On Wed, 2006-09-06 at 11:02, Sriram Dandapani wrote: > I created a super user using the createuser command and issued > > vacuumdb -f -U superuser <database-name> > > I still keep getting a decreasing transaction count warning. Am I doing > something wrong here.(The database is about 120G and while I do expect > vacuum full to take time, I expect the warning to show an increasing > count)
I started this a few hours ago (I guess the message shows as a general warning). I am only interested in the specific database; will this command NOT do a full vacuum of the specific database? (I would like to save the few hours I have invested in this vacuum command, if possible.) -----Original Message----- From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] Sent: Wednesday, September 06, 2006 10:35 AM To: Sriram Dandapani Cc: Tom Lane; pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem Change the -f to -a On Wed, 2006-09-06 at 11:02, Sriram Dandapani wrote: > I created a super user using the createuser command and issued > > vacuumdb -f -U superuser <database-name> > > I still keep getting a decreasing transaction count warning. Am I doing > something wrong here.(The database is about 120G and while I do expect > vacuum full to take time, I expect the warning to show an increasing > count)
Transaction ID wraparound is a cluster issue, not an individual database issue. Due to the way PostgreSQL is designed, you need to vacuum ALL your databases, but you don't need a FULL vacuum on them all, just a regular vacuum. I'm guessing that your other databases aren't real big anyway, so it shouldn't be a big problem. If your other databases are huge, then yeah, it might take a while. On Wed, 2006-09-06 at 12:39, Sriram Dandapani wrote: > I started this about a few hours ago (I guess the message shows as a > general warning)..I am only interested in the specific database..will > this command NOT do a full vacuum of the specific database(I would like > to save the few hours that I invested in this vacuum command if > possible) > > -----Original Message----- > From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] > Sent: Wednesday, September 06, 2006 10:35 AM > To: Sriram Dandapani > Cc: Tom Lane; pgsql-admin@postgresql.org > Subject: Re: [ADMIN] transactoin id wraparound problem > > Change the -f to -a > > On Wed, 2006-09-06 at 11:02, Sriram Dandapani wrote: > > I created a super user using the createuser command and issued > > > > vacuumdb -f -U superuser <database-name> > > > > I still keep getting a decreasing transaction count warning. Am I > doing > > something wrong here.(The database is about 120G and while I do expect > > vacuum full to take time, I expect the warning to show an increasing > > count)
Thanks. Is there a way to monitor vacuum progress? Can I resume normal operations, assuming vacuum will update the transaction ids, or should I wait till it finishes? -----Original Message----- From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] Sent: Wednesday, September 06, 2006 10:57 AM To: Sriram Dandapani Cc: Tom Lane; pgsql-admin@postgresql.org Subject: RE: [ADMIN] transactoin id wraparound problem Transaction ID wraparound is a cluster issue, not an individual database issue. Due to the way PostgreSQL is designed, you need to vacuum ALL your databases, but you don't need a FULL vacuum on them all, just a regular vacuum. I'm guessing that your other databases aren't real big anyway, so it shouldn't be a big problem. If your other databases are huge, then yeah, it might take a while. On Wed, 2006-09-06 at 12:39, Sriram Dandapani wrote: > I started this about a few hours ago (I guess the message shows as a > general warning)..I am only interested in the specific database..will > this command NOT do a full vacuum of the specific database(I would like > to save the few hours that I invested in this vacuum command if > possible) > > -----Original Message----- > From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] > Sent: Wednesday, September 06, 2006 10:35 AM > To: Sriram Dandapani > Cc: Tom Lane; pgsql-admin@postgresql.org > Subject: Re: [ADMIN] transactoin id wraparound problem > > Change the -f to -a > > On Wed, 2006-09-06 at 11:02, Sriram Dandapani wrote: > > I created a super user using the createuser command and issued > > > > vacuumdb -f -U superuser <database-name> > > > > I still keep getting a decreasing transaction count warning. Am I > doing > > something wrong here.(The database is about 120G and while I do expect > > vacuum full to take time, I expect the warning to show an increasing > > count)
On Wed, Sep 06, 2006 at 12:23:01PM -0700, Sriram Dandapani wrote: > Thanks > > Is there a way to monitor vacuum progress. Can I resume normal > operations assuming vacuum will update the transaction ids or should I > wait till it finishes. That depends on how many transactions you think will happen while vacuum is running. If it's a lot, you could pass the fatal point (== you lose data) before vacuum finishes. In most cases, it's probably ok, but only you can decide how dangerous it would be for you to lose data. A -- Andrew Sullivan | ajs@crankycanuck.ca This work was visionary and imaginative, and goes to show that visionary and imaginative work need not end up well. --Dennis Ritchie
On Wed, 2006-09-06 at 14:23, Sriram Dandapani wrote: > Thanks > > Is there a way to monitor vacuum progress. Can I resume normal > operations assuming vacuum will update the transaction ids or should I > wait till it finishes. As Andrew mentioned, there's the possibility of wrapping before vacuum finishes. Other than that, there's no danger to continuing to use the db while it's being vacuumed. There was a post here a little while back that had a query that would show you how many transactions you had until wraparound and which databases had been vacuumed when... The basic data is in pg_database. Take a look at that table.
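[Editor's note: a rough version of the pg_database query Scott alludes to, assuming 8.1's age() function and the 2^31 horizon; column aliases are illustrative.]

```sql
-- Transactions left before each database's datfrozenxid falls 2^31 behind:
SELECT datname,
       age(datfrozenxid)               AS xid_age,
       2147483648 - age(datfrozenxid)  AS transactions_until_wraparound
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```

The databases at the top of the list are the ones most urgently in need of a vacuum.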
Andrew Sullivan <ajs@crankycanuck.ca> writes: > On Wed, Sep 06, 2006 at 12:23:01PM -0700, Sriram Dandapani wrote: >> Is there a way to monitor vacuum progress. Can I resume normal >> operations assuming vacuum will update the transaction ids or should I >> wait till it finishes. > That depends on how many transactions you think will happen while > vacuum is running. If it's a lot, you could pass the fatal point (== > you lose data) before vacuum finishes. He's apparently running 8.1 (I don't think that error message wording exists in older releases), so what would happen is that the DB will refuse to start new transactions; but there won't be any data lost. Still it might be best to let the vacuum have all the machine's cycles to get done as fast as possible. regards, tom lane
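[Editor's note: Tom's point, that 8.1 stops accepting transactions rather than losing data, follows from the XID limits the server computes. A back-of-the-envelope Python sketch; the 1M/10M safety margins are assumptions based on the 8.1-era sources, not an exact re-implementation.]

```python
# Hedged sketch of PostgreSQL 8.1-era XID wraparound arithmetic (assumed margins).

HORIZON = 2**31   # XIDs are 32-bit; comparisons are modulo-2^32 with a 2^31 horizon
MODULUS = 2**32

def xid_limits(oldest_frozen_xid):
    """Given the oldest datfrozenxid in the cluster, compute the XIDs at which
    the server would wrap, stop accepting commands, and start warning."""
    wrap_limit = (oldest_frozen_xid + HORIZON) % MODULUS
    stop_limit = (wrap_limit - 1_000_000) % MODULUS    # assumed: refuse new transactions here
    warn_limit = (stop_limit - 10_000_000) % MODULUS   # assumed: begin emitting WARNINGs here
    return wrap_limit, stop_limit, warn_limit

def transactions_remaining(current_xid, wrap_limit):
    """Roughly the count shown in 'must be vacuumed within N transactions'."""
    return (wrap_limit - current_xid) % MODULUS

wrap, stop, warn = xid_limits(oldest_frozen_xid=100)
# A current XID 10,094,646 short of the wrap point reproduces the count in the thread:
print(transactions_remaining(wrap - 10_094_646, wrap))  # → 10094646
```

A vacuum of every database advances the oldest datfrozenxid, which pushes all three limits forward and makes the warning count go up instead of down.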
What permissions, exactly, are required to vacuum every table, including system catalogs? Thanks, -alex -----Original Message----- From: pgsql-admin-owner@postgresql.org [mailto:pgsql-admin-owner@postgresql.org] On Behalf Of Tom Lane Sent: Wednesday, September 06, 2006 9:41 AM To: Sriram Dandapani Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem "Sriram Dandapani" <sdandapani@counterpane.com> writes: > I get error messages on the console that says > WARNING: database "xxx" must be vacuumed within 10094646 transactions > I shutdown, restart pg and issue a vacuumdb -f <database> The shutdown/restart was a waste of typing, and -f doesn't really help here either. > I still get messages saying database must be vacuumed and the > transaction count keeps decreasing. I'll bet a nickel you did not run the vacuum as superuser. It has to be done by a superuser to have permission to vacuum every table in the database (including system catalogs). regards, tom lane
On Wed, 2006-09-06 at 16:06, Sriram Dandapani wrote: > Curious why autovacuum does not handle this problem. Here are my > settings > > max_fsm_pages = 2000000 > > autovacuum = on # enable autovacuum > > autovacuum_naptime = 300 # time between autovacuum runs, > in > > autovacuum_vacuum_threshold = 10000 # min # of tuple updates before > # vacuum > autovacuum_analyze_threshold = 1500 # min # of tuple updates before > # analyze > autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before > # vacuum > autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before > # analyze > autovacuum_vacuum_cost_delay = 100 > > autovacuum_vacuum_cost_limit = 1000 > > > The database has a constant rate of about 50-100G a day of data flowing > in which gets deleted after 2 days.(this cycle keeps repeating). There > are historical tables that grow at a rate of 2G-4G a day The most common cause of these problems is that you have long standing transactions that never get closed. Look for some connection to the database(s) (any of them) that are never committed or rolled back. One "idle in transaction" connection from one program can cause this problem. If you're doing your updates and / or deletes individually instead of in transactional batches that will just make it worse.
Curious why autovacuum does not handle this problem. Here are my settings max_fsm_pages = 2000000 autovacuum = on # enable autovacuum autovacuum_naptime = 300 # time between autovacuum runs, in autovacuum_vacuum_threshold = 10000 # min # of tuple updates before # vacuum autovacuum_analyze_threshold = 1500 # min # of tuple updates before # analyze autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before # vacuum autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before # analyze autovacuum_vacuum_cost_delay = 100 autovacuum_vacuum_cost_limit = 1000 The database has a constant rate of about 50-100G a day of data flowing in which gets deleted after 2 days.(this cycle keeps repeating). There are historical tables that grow at a rate of 2G-4G a day -----Original Message----- From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] Sent: Wednesday, September 06, 2006 12:38 PM To: Sriram Dandapani Cc: Tom Lane; pgsql-admin@postgresql.org Subject: RE: [ADMIN] transactoin id wraparound problem On Wed, 2006-09-06 at 14:23, Sriram Dandapani wrote: > Thanks > > Is there a way to monitor vacuum progress. Can I resume normal > operations assuming vacuum will update the transaction ids or should I > wait till it finishes. As Andrew mentioned, there's the possibility of wrapping before vacuum finishes. Other than that, there's no danger to continuing to use the db while it's being vacuumed. There was a post here a little while back that had a query that would show you how many transactions you had until wraparound and which databases had been vacuumed when... The basic data is in pg_database. Take a look at that table.
Scott Marlowe <smarlowe@g2switchworks.com> writes: > The most common cause of these problems is that you have long standing > transactions that never get closed. That can cause table bloat but it shouldn't have anything to do with XID wraparound problems. My guess is that the vacuum attempts are failing somewhere along the line, and that some perusal of the postmaster log would shed light on the issue. regards, tom lane
"Sriram Dandapani" <sdandapani@counterpane.com> writes: > Curious why autovacuum does not handle this problem. Here are my > settings > autovacuum = on # enable autovacuum Do you have stats_row_level enabled? If not, autovac doesn't work. regards, tom lane
stats_start_collector = on #stats_command_string = off #stats_block_level = off stats_row_level = on Yes...it is on. I have other databases with similar data flow. Haven't encountered this issue yet (although I have to watch and vacuum manually to prevent such errors). Which option in the config file should I tweak to see autovacuum logging entries? Thanks Sriram -----Original Message----- From: Tom Lane [mailto:tgl@sss.pgh.pa.us] Sent: Wednesday, September 06, 2006 3:11 PM To: Sriram Dandapani Cc: Scott Marlowe; pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem "Sriram Dandapani" <sdandapani@counterpane.com> writes: > Curious why autovacuum does not handle this problem. Here are my > settings > autovacuum = on # enable autovacuum Do you have stats_row_level enabled? If not, autovac doesn't work. regards, tom lane
Hi I have several such databases to issue vacuum on. If I were to vacuum each table individually, would the transaction id be updated after every table vacuum. Wonder if it is because I have several large partitioned tables that I drop every day that don't get vacuumed enough. -----Original Message----- From: Scott Marlowe [mailto:smarlowe@g2switchworks.com] Sent: Wednesday, September 06, 2006 2:18 PM To: Sriram Dandapani Cc: Tom Lane; pgsql-admin@postgresql.org Subject: RE: [ADMIN] transactoin id wraparound problem On Wed, 2006-09-06 at 16:06, Sriram Dandapani wrote: > Curious why autovacuum does not handle this problem. Here are my > settings > > max_fsm_pages = 2000000 > > autovacuum = on # enable autovacuum > > autovacuum_naptime = 300 # time between autovacuum runs, > in > > autovacuum_vacuum_threshold = 10000 # min # of tuple updates before > # vacuum > autovacuum_analyze_threshold = 1500 # min # of tuple updates before > # analyze > autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before > # vacuum > autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before > # analyze > autovacuum_vacuum_cost_delay = 100 > > autovacuum_vacuum_cost_limit = 1000 > > > The database has a constant rate of about 50-100G a day of data flowing > in which gets deleted after 2 days.(this cycle keeps repeating). There > are historical tables that grow at a rate of 2G-4G a day The most common cause of these problems is that you have long standing transactions that never get closed. Look for some connection to the database(s) (any of them) that are never committed or rolled back. One "idle in transaction" connection from one program can cause this problem. If you're doing your updates and / or deletes individually instead of in transactional batches that will just make it worse.
Sriram Dandapani wrote: > Hi > I have several such databases to issue vacuum on. If I were to vacuum > each table individually, would the transaction id be updated after every > table vacuum. No, you must issue database-wide vacuums. Single-table vacuums, even if done to each and every table, do not suffice. > Wonder if it is because I have several large partitioned tables that I > drop every day that don't get vacuumed enough. The best time of the day to do the vacuum would be just after one such partition was dropped, one would think. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
I have a 60Gb db on which I issued a vacuumdb -a call as a superuser. It has been running for about 20 hours. (All client connections have been disabled as well as jobs that access any table). I see 0% cpu utilization. The vacuumdb process by itself takes about 10% memory, and postgres uses very little cpu and some io (10-20%). Is there a way to tell vacuumdb to use more resources so it can run faster? -----Original Message----- From: Alvaro Herrera [mailto:alvherre@commandprompt.com] Sent: Wednesday, September 06, 2006 7:48 PM To: Sriram Dandapani Cc: Scott Marlowe; Tom Lane; pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem Sriram Dandapani wrote: > Hi > I have several such databases to issue vacuum on. If I were to vacuum > each table individually, would the transaction id be updated after every > table vacuum. No, you must issue database-wide vacuums. Single-table vacuums, even if done to each and every table, do not suffice. > Wonder if it is because I have several large partitioned tables that I > drop every day that don't get vacuumed enough. The best time of the day to do the vacuum would be just after one such partition was dropped, one would think. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Sriram Dandapani wrote: > Ihave a 60Gb db on which I issued a vacuumdb -a call as a superuser. It > as been running for about 20 hours. (All client connections have been > disabled as well as jobs that access any table). I see 0% cpu > utilization. The vacuumdb process by itself takes about 10% memory and > postgres uses very little cpu and some io.(10-20%). > > Is there a way to tell vacuumdb to use more resources so it can run > faster? Maybe you have the vacuum cost delay parameters set? IIRC you should drop vacuum_cost_delay to 0 if you want it to run as fast as possible. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
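[Editor's note: Alvaro's suggestion maps to the cost-based vacuum delay settings; a minimal postgresql.conf sketch with 8.1-era names, values illustrative.]

```
# Cost-based vacuum delay: a nonzero delay throttles VACUUM's I/O.
vacuum_cost_delay = 0       # 0 disables throttling, so a manual vacuum runs flat out
#vacuum_cost_limit = 200    # accumulated cost before sleeping, when delay > 0
```

Note that the thread's config also sets autovacuum_vacuum_cost_delay = 100, which throttles autovacuum runs specifically; a manual vacuumdb run is governed by vacuum_cost_delay.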
Many thanks for this suggestion. Vacuumdb -U <superuser> -a finished in 3 hours Regards Sriram -----Original Message----- From: Alvaro Herrera [mailto:alvherre@commandprompt.com] Sent: Thursday, September 07, 2006 7:03 AM To: Sriram Dandapani Cc: Scott Marlowe; Tom Lane; pgsql-admin@postgresql.org Subject: Re: [ADMIN] transactoin id wraparound problem Sriram Dandapani wrote: > Ihave a 60Gb db on which I issued a vacuumdb -a call as a superuser. It > as been running for about 20 hours. (All client connections have been > disabled as well as jobs that access any table). I see 0% cpu > utilization. The vacuumdb process by itself takes about 10% memory and > postgres uses very little cpu and some io.(10-20%). > > Is there a way to tell vacuumdb to use more resources so it can run > faster? Maybe you have the vacuum cost delay parameters set? IIRC you should drop vacuum_cost_delay to 0 if you want it to run as fast as possible. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.