Discussion: Moving database install to new SAN
Hello,
Can someone help me with moving a PostgreSQL database install to new SAN disks? We are getting a new set of faster disks, and I would like to find out whether there is a way, without reinstalling, to move the existing database and data to the new disks with minimal downtime.
Thanks
Abu
On Thu, Sep 13, 2007 at 01:15:34PM -0700, Abu Mushayeed wrote:

> Can someone help me with moving a postgres database install to a new
> SAN disks. We are getting a new set of faster disk, I would like to
> find out is there a way, without reinstall to move existing database
> and data to the new disks with minimal downtime.

If you cannot move online with some SAN tools, you may simply:

1. stop postgres
2. copy the whole data directory over to the new SAN
3. start postgres with the new data directory

Nothing tricky there. What size is your db space? What OS are you running PostgreSQL on?

Tino.

--
www.spiritualdesign-chemnitz.de
www.lebensraum11.de
Tino Schwarze * Parkstraße 17h * 09120 Chemnitz
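The three steps above can be sketched as a short shell session. This is a minimal sketch, not a definitive procedure: the paths are made-up examples, and you should substitute your actual data directories and make sure the postgres OS user owns the new location.

```shell
# Hypothetical paths -- substitute your real old and new data directories.
OLD_PGDATA=/var/lib/pgsql/data
NEW_PGDATA=/mnt/new_san/pgsql/data

pg_ctl -D "$OLD_PGDATA" stop -m fast   # 1. stop postgres cleanly
cp -a "$OLD_PGDATA/." "$NEW_PGDATA/"   # 2. copy the whole data directory, preserving ownership/permissions
pg_ctl -D "$NEW_PGDATA" start          # 3. start postgres against the new directory
```

The `cp -a` flag matters: it preserves permissions, ownership, and symlinks, which the data directory depends on.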
Tino Schwarze wrote:
> On Thu, Sep 13, 2007 at 01:15:34PM -0700, Abu Mushayeed wrote:
>
>> Can someone help me with moving a postgres database install to a new
>> SAN disks. We are getting a new set of faster disk, I would like to
>> find out is there a way, without reinstall to move existing database
>> and data to the new disks with minimal downtime.
>
> If you cannot move online with some SAN tools, you may simply
>
> 1. stop postgres

Not yet...

> 2. copy whole data directory over to new SAN

If the database is big, then I think it is much faster to copy the running database to the new SAN, then stop postgres and rsync what has changed - that will probably be only a few files.

> 3. start postgres with new data directory

Minimal downtime, I think.

Best regards
--
Andrzej Zawadzki
>> 2. copy whole data directory over to new SAN
>
> If database is big, then I think that is much faster to copy running
> database to new SAN.
> Then stop postgres and rsync what was changed - but this will be probably
> only a few files.
I believe if you do this, you will not get a functional database in the end. There is a lot of data held in memory/buffers that may not have been flushed to disk, and you have no guarantee that this method will capture it.
Either shut the database down and cold-copy it, or set up replication using Slony to move the data in "hot" mode.
HTH,
Chris
Chris Hoover wrote:
>>> 2. copy whole data directory over to new SAN
>>
>> If database is big, then I think that is much faster to copy running
>> database to new SAN.
>> Then stop postgres and rsync what was changed - but this will be probably
>> only a few files.
>
> I believe if you do this, you will not get a function database in the end.
> There is a lot of data that is held in memory/buffers that may not be
> flushed to the disks. You have no guarantee you will get this data with
> this method.

The method would work if they are willing to have an outage. Basically you do an initial rsync of the large db...

Then you shut down the db.

And rsync again, which will be faster than doing a complete move with shutdown.

The key here is, if you use this method... there is *zero* way around shutting down the database before the second rsync.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/
On Thursday, 20 September 2007, Joshua D. Drake wrote:
> Chris Hoover wrote:
> >>> 2. copy whole data directory over to new SAN
> >>
> >> If database is big, then I think that is much faster to copy running
> >> database to new SAN.
> >> Then stop postgres and rsync what was changed - but this will be
> >> probably only a few files.
> >
> > I believe if you do this, you will not get a function database in the
> > end. There is a lot of data that is held in memory/buffers that may not
> > be flushed to the disks. You have no guarantee you will get this data
> > with this method.
>
> The method would work if they are willing to have an outage. Basically
> you do an initial rsync of the large db...

Perhaps use a tablespace to move the biggest data, then rsync...

> Then you shut down the db.
>
> And rsync again, which will be faster than doing a complete move with
> shutdown.
>
> The key here is, if you use this method... there is *zero* way around
> shutting down the database before the second rsync.
>
> Sincerely,
>
> Joshua D. Drake

--
Cédric Villemain
Database Administrator
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
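The tablespace suggestion could look something like the sketch below: move the largest tables onto the new SAN while the database is up, shrinking what the final stop-and-rsync pass has to copy. The database name, table name, and location are made-up examples, and these are live DDL commands against a running server, so this is a sketch rather than a runnable recipe.

```shell
# Hypothetical names throughout: mydb, big_table, and the LOCATION path are examples.
psql -d mydb -c "CREATE TABLESPACE new_san LOCATION '/mnt/new_san/pgsql/ts'"

# Moves the table's files onto the new SAN. Note this takes an exclusive
# lock and rewrites the table while it runs, so schedule it accordingly.
psql -d mydb -c "ALTER TABLE big_table SET TABLESPACE new_san"
```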