Discussion: How overcome wait for vacuum full?
Dear Folks,

I am running a full vacuum on a database. It's taking longer than I
hoped. In particular, the vacuum still hasn't reached the table that
will benefit most from the vacuum.

Can I move the existing table to a backup, make a copy of the table
back to its original name, restart the application, and run the vacuum
on the backup? Or can I reclaim the disk space by dropping the
original after making a copy?

If I can, without losing data, what is the best way to do that?
I am reaching the end of the period when this database application can
be disabled.

I have never run vacuum full on such a large database (24G) and am
desperate to get the application back online.
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
Nick Urbanik wrote:
> Dear Folks,
>
> I am running a full vacuum on a database. It's taking longer than I
> hoped. In particular, the vacuum still hasn't reached the table that
> will benefit most from the vacuum.
>
> Can I move the existing table to a backup, make a copy of the table
> back to its original name, restart the application, and run the vacuum
> on the backup? Or can I reclaim the disk space by dropping the
> original after making a copy?
>
> If I can, without losing data, what is the best way to do that?
> I am reaching the end of the period when this database application can
> be disabled.
>
> I have never run vacuum full on such a large database (24G) and am
> desperate to get the application back online.

Instead of waiting a month for the time when you can take the
application offline (thus accumulating a month's worth of dead tuples),
run a non-full vacuum more often (say, once a day or more). It doesn't
lock the table, so the app can continue to be online while it runs.

If you have too many dead tuples and are desperate to get the table into
a reasonable non-bloated state, try CLUSTER instead of VACUUM FULL. It
might finish faster.
--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
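[Editor's note: a hedged SQL sketch of the two suggestions above. The table
and index names are invented for illustration, and the CLUSTER statement uses
the pre-8.0 syntax appropriate to a 7.3-era server.]

```sql
-- Routine maintenance: a plain (non-full) vacuum does not take an
-- exclusive lock, so the application can stay online; run it daily
-- or more often from cron.
VACUUM ANALYZE VERBOSE mytable;

-- Desperation measure: rewrite the table in index order to shed bloat.
-- This does take an exclusive lock, but may finish faster than
-- VACUUM FULL.  PostgreSQL 7.3 syntax (index name is illustrative):
CLUSTER mytable_pkey ON mytable;
```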
Nick Urbanik wrote:
> I am running a full vacuum on a database. It's taking longer than I
> hoped. In particular, the vacuum still hasn't reached the table that
> will benefit most from the vacuum.
>
> Can I move the existing table to a backup, make a copy of the table
> back to its original name, restart the application, and run the vacuum
> on the backup? Or can I reclaim the disk space by dropping the
> original after making a copy?
>
> If I can, without losing data, what is the best way to do that?
> I am reaching the end of the period when this database application can
> be disabled.

If you're waiting on vacuum to get around to the one table, you can run
VACUUM FULL <tablename> instead.

Personally I've found that dropping indices (including the PK), then
vacuum full, then recreating indices can be an awful lot faster than
just leaving vacuum full to its own devices.

Yes, you ought to be able to reclaim disk space by the copy/drop
original/rename procedure, but that gets hard to manage if you have
triggers or foreign keys on the table in question. You might want to
use CREATE TABLE table_copy (LIKE orig_table) to preserve column
defaults if you go down that path.

HTH,
Geoff
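[Editor's note: the drop-indices-first approach above might be sketched as
follows; all object and column names here are invented for illustration.]

```sql
-- Inside the maintenance window, with the application stopped:
ALTER TABLE mytable DROP CONSTRAINT mytable_pkey;  -- also drops the PK index
DROP INDEX mytable_created_idx;

-- With no indexes to maintain, VACUUM FULL has far less work to do:
VACUUM FULL VERBOSE mytable;

-- Recreate the indexes afterwards:
ALTER TABLE mytable ADD PRIMARY KEY (id);
CREATE INDEX mytable_created_idx ON mytable (created);
```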
Thank you Alvaro,

On 15/05/07 18:53 -0400, Alvaro Herrera wrote:
> Instead of waiting a month for the time when you can take the
> application offline (thus accumulating a month's worth of dead tuples),
> run a non-full vacuum more often (say, once a day or more). It doesn't
> lock the table so the app can continue to be online while it runs.

Yes, it is vacuumed non-full regularly. However, for some reason,
only a full vacuum will recover the space.

> If you have too many dead tuples and are desperate to get the table in a
> reasonable non-bloated state, try CLUSTER instead of VACUUM FULL. It
> might finish faster.

1. If I am running vacuum full on the database as:
       vacuum full verbose;
   without specifying a table, and I interrupt it, will the vacuuming
   of the other tables that have already been vacuumed be undone? How
   far is vacuuming rolled back if I interrupt it in this case?

2. If I interrupt the vacuum and run cluster, how long do you think
   cluster might take relative to a full vacuum? Is it likely to save
   disk space? (That is the aim, rather than speeding things up.)
   The values of reltuples and relpages for this table are:
   reltuples: 572208, relpages: 187502

3. This is PostgreSQL 7.3.8.
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
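[Editor's note: for reference, the reltuples/relpages figures quoted above
come from the pg_class system catalog; the table name below is illustrative.
Note these are estimates, refreshed by VACUUM and ANALYZE.]

```sql
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'mytable';
```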
Thank you Geoff,

On 15/05/07 16:11 -0700, Geoff Tolley wrote:
> If you're waiting on vacuum to get around to the one table, you can run
> VACUUM FULL <tablename> instead.

Clearly that is what I should have done! It has reached the table now.

> Personally I've found that dropping indices (including the PK), then
> vacuum full, then recreating indices can be an awful lot faster than
> just leaving vacuum full to its own devices.

I'll investigate this for next time.

> Yes you ought to be able to reclaim disk space by the copy/drop
> original/rename procedure, but that gets hard to manage if you have
> triggers or foreign keys on the table in question. You might want to
> use CREATE TABLE table_copy (LIKE orig_table) to preserve column
> defaults if you go down that path.

It seems that PostgreSQL 7.3.8 does not support create table
table_name like original_table.

Can you suggest a way I can create a duplicate table with the same
triggers and other stuff that this table has?

What is the most efficient way to copy the data?
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
Nick Urbanik wrote:
> It seems that PostgreSQL 7.3.8 does not support create table
> table_name like original_table.
>
> Can you suggest a way I can create a duplicate table with the same
> triggers and other stuff that this table has?
>
> What is the most efficient way to copy the data?

Something like:

    CREATE TABLE mytable_copy AS SELECT * FROM mytable;
    ALTER TABLE mytable RENAME TO mytable_old;
    ALTER TABLE mytable_copy RENAME TO mytable;
    /* Now add primary key, indices, foreign keys, triggers, rules and
       column defaults to mytable, and you'll probably want to drop the
       foreign keys at least from mytable_old. I'm afraid you'll have
       to sort out manually what these should be. */

7.3.8 is quite seriously old. 7.3.19 is the latest in that tree if you
must stick with it.

HTH,
Geoff
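[Editor's note: a hedged sketch of the manual follow-up step the comment above
describes. Every constraint, column and table name here is invented for
illustration; the real ones must be read off the original table, e.g. from
pg_dump --schema-only output.]

```sql
-- Recreate constraints and defaults on the renamed copy:
ALTER TABLE mytable ADD PRIMARY KEY (id);
CREATE INDEX mytable_created_idx ON mytable (created);
ALTER TABLE mytable ALTER COLUMN created SET DEFAULT now();
ALTER TABLE child_table
    ADD FOREIGN KEY (mytable_id) REFERENCES mytable (id);

-- And drop at least the foreign keys pointing at/from the old copy,
-- so it no longer constrains live data:
ALTER TABLE mytable_old DROP CONSTRAINT mytable_old_some_fkey;
```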
> Now add primary key, indices, foreign keys, triggers, rules, column
> defaults to mytable, and you'll probably want to drop the foreign keys
> at least from mytable_old. I'm afraid you'll have to sort out manually
> what these should be.
>
> 7.3.8 is quite seriously old. 7.3.19 is the latest in that tree if you
> must stick with it.

7.3 is also considered EOL by the project. Time to upgrade.

J
Nick Urbanik wrote:
> Thank you Alvaro,
>
> On 15/05/07 18:53 -0400, Alvaro Herrera wrote:
> > Instead of waiting a month for the time when you can take the
> > application offline (thus accumulating a month's worth of dead tuples),
> > run a non-full vacuum more often (say, once a day or more). It doesn't
> > lock the table so the app can continue to be online while it runs.
>
> Yes, it is vacuumed non-full regularly. However, for some reason,
> only a full vacuum will recover the space.

You probably need to configure the FSM settings better. See
max_fsm_pages in the docs.
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Dear Alvaro,

On 15/05/07 21:12 -0400, Alvaro Herrera wrote:
> Nick Urbanik wrote:
> > On 15/05/07 18:53 -0400, Alvaro Herrera wrote:
> > > Instead of waiting a month for the time when you can take the
> > > application offline (thus accumulating a month's worth of dead tuples),
> > > run a non-full vacuum more often (say, once a day or more). It doesn't
> > > lock the table so the app can continue to be online while it runs.
> >
> > Yes, it is vacuumed non-full regularly. However, for some reason,
> > only a full vacuum will recover the space.
>
> You probably need to configure the FSM settings better. See
> max_fsm_pages in the docs.

This is probably the critical piece of information we need to prevent
the need for another sleepless night and an outage! Thank you.
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
On Wed, May 16, 2007 at 11:20:52AM +1000, Nick Urbanik wrote:
> Dear Alvaro,
>
> On 15/05/07 21:12 -0400, Alvaro Herrera wrote:
> > Nick Urbanik wrote:
> > > On 15/05/07 18:53 -0400, Alvaro Herrera wrote:
> > > > Instead of waiting a month for the time when you can take the
> > > > application offline (thus accumulating a month's worth of dead tuples),
> > > > run a non-full vacuum more often (say, once a day or more). It doesn't
> > > > lock the table so the app can continue to be online while it runs.
> > >
> > > Yes, it is vacuumed non-full regularly. However, for some reason,
> > > only a full vacuum will recover the space.
> >
> > You probably need to configure the FSM settings better. See
> > max_fsm_pages in the docs.
>
> This is probably the critical piece of information we need to prevent
> the need for another sleepless night and an outage! Thank you.

http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
might be of use as well.
--
Jim Nasby decibel@decibel.org EnterpriseDB http://enterprisedb.com
512.569.9461 (cell)
Dear Jim,

On 15/05/07 20:28 -0500, Jim C. Nasby wrote:
> On Wed, May 16, 2007 at 11:20:52AM +1000, Nick Urbanik wrote:
> > Dear Alvaro,
> >
> > On 15/05/07 21:12 -0400, Alvaro Herrera wrote:
> > > You probably need to configure the FSM settings better. See
> > > max_fsm_pages in the docs.
> >
> > This is probably the critical piece of information we need to prevent
> > the need for another sleepless night and an outage! Thank you.
>
> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
> might be of use as well.

I have not been able to find this page; Pervasive have moved their
site around. Any keywords in this article I can google for to find
the new URL?
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
Dear Folks,

On 24/07/07 05:31 +1000, Nick Urbanik wrote:
> Dear Jim,
>
> On 15/05/07 20:28 -0500, Jim C. Nasby wrote:
> > On Wed, May 16, 2007 at 11:20:52AM +1000, Nick Urbanik wrote:
> > > On 15/05/07 21:12 -0400, Alvaro Herrera wrote:
> > > > You probably need to configure the FSM settings better. See
> > > > max_fsm_pages in the docs.
> > >
> > > This is probably the critical piece of information we need to prevent
> > > the need for another sleepless night and an outage! Thank you.
> >
> > http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
> > might be of use as well.
>
> I have not been able to find this page; Pervasive have moved their
> site around. Any keywords in this article I can google for to find
> the new URL?

Thanks: I found a cache of the article, which is entitled
"Q10087 - INFO: Is PostgreSQL remembering what you vacuumed?"

http://72.14.253.104/search?q=cache:n_LQiTbm9joJ:www.pervasivepostgres.com/instantkb13/article.aspx%3Fid%3D10087+%22Is+PostgreSQL+remembering+what+you+vacuumed%3F%22&hl=en&ct=clnk&cd=2&gl=au

A simple summary: do a full, verbose vacuum of the entire PostgreSQL
installation (all the databases together); the last two lines of the
output then report FSM utilisation. The values of max_fsm_relations
and max_fsm_pages should be set in postgresql.conf in light of this
information.
--
Nick Urbanik RHCE http://nicku.org nicku@nicku.org
GPG: 7FFA CDC7 5A77 0558 DC7A 790A 16DF EC5B BB9D 2C24 ID: BB9D2C24
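[Editor's note: a hedged sketch of the resulting postgresql.conf fragment.
The numbers below are purely illustrative; on an FSM-era server (7.x/8.x)
they should be sized from the free-space-map figures reported at the end of
a database-wide VACUUM VERBOSE, with headroom for growth. Both settings
require a server restart to take effect.]

```
# postgresql.conf -- illustrative values only
max_fsm_relations = 1000    # should exceed the number of tables and indexes
max_fsm_pages = 500000      # should exceed total pages with free space
```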