Discussion: Best practise for upgrade of 24GB+ database
In an academic setting, we have a couple of larger-than-typical Postgres databases. One for Moodle is now 15 GB and another for a research project is currently 24 GB.

While upgrading PostgreSQL in Debian from 8.3 to 8.4, the downtime on the 24 GB research database is extensive using pg_upgradecluster. It has now been 26 hours of downtime, and about 18 GB of the 24 GB has been restored into the 8.4 destination so far.

I read some of the tips on the PostgreSQL wiki on performance tweaks ( http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ) and had implemented some improvements, such as shared_buffers, in the 8.3 instance prior to the upgrade. If I were doing this again, I would find the postgresql.conf used by the pg_upgradecluster script for 8.4 and tune it prior to the run.

How do others manage larger database upgrades while minimizing downtime? Do you avoid pg_upgradecluster and simply do a pg_restore from a dump made prior to the upgrade? Do you run replication and then resync it after the upgrade is complete? Googling for info on this, I've only found remarks about it taking longer than you'd expect.
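For reference, the dump-and-restore route asked about above can be sketched roughly as follows. This is a hedged outline, not the exact Debian procedure: the ports (5432 for the old 8.3 cluster, 5433 for the new 8.4 one) and the database name `research` are assumptions.

```shell
# Dump from the running 8.3 cluster in custom format
# (compressed, and restorable selectively or in parallel).
pg_dump -p 5432 -Fc research > research.dump

# After installing 8.4 and creating the new cluster,
# restore into it. pg_restore's -j (parallel jobs) option
# is available from 8.4 onward and can cut restore time
# substantially on multi-core hardware.
createdb -p 5433 research
pg_restore -p 5433 -d research -j 4 research.dump
```

The database is unavailable for writes for the whole dump-plus-restore window, so this mainly helps when the restore itself is the bottleneck.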
> -----Original Message-----
> From: pgsql-admin-owner@postgresql.org [mailto:pgsql-admin-owner@postgresql.org] On Behalf Of francis picabia
> Sent: Friday, January 20, 2012 1:12 PM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] Best practise for upgrade of 24GB+ database
>
> How do others manage larger database upgrades while minimizing
> downtime? Do you avoid pg_upgradecluster and simply do a pg_restore
> from a dump made prior to the upgrade? Do you run a replication
> and then resync it after the upgrade is complete? Googling for info
> on this I've only found remarks about it taking longer than you'd
> expect.

In the past I've used Slony to upgrade much larger database clusters than yours with minimal downtime (I'm talking seconds for the actual master switch). You set up a new replica on the new version and then move the master from old to new. No need to explicitly resync afterwards.

Brad.
On Fri, Jan 20, 2012 at 2:45 PM, Nicholson, Brad (Toronto, ON, CA) <bnicholson@hp.com> wrote:
> In the past I've used Slony to upgrade much larger database clusters than yours with minimal downtime (I'm talking seconds for the actual master switch). You set up a new replica on the new version and then move the master from old to new. No need to explicitly resync afterwards.

That's great information. 9.0 is introducing streaming replication, so that is another option I'll look into.
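The switchover Brad describes comes down to a short slonik script once the new-version replica has caught up. A rough sketch, assuming a single replication set (id 1) with the old server as node 1 and the new-version server as node 2; the cluster name, node ids, and conninfo strings are all placeholders:

```shell
slonik <<'EOF'
cluster name = upgrade_cluster;
node 1 admin conninfo = 'dbname=research host=oldhost';
node 2 admin conninfo = 'dbname=research host=newhost';

# Block writes on the old origin so no changes are lost.
lock set (id = 1, origin = 1);

# Let the new node confirm it has everything.
wait for event (origin = 1, confirmed = 2, wait on = 1);

# Promote the new-version node to origin of the set.
move set (id = 1, old origin = 1, new origin = 2);
EOF
```

The lock-to-move window is what keeps the actual downtime to seconds; everything before it (initial copy, catch-up) happens while the old master stays in service.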
francis picabia <fpicabia@gmail.com> wrote:
> That's great information. 9.0 is introducing streaming
> replication, so that is another option I'll look into.

We upgrade multi-TB databases in just a couple of minutes using pg_upgrade with the hard-link option. That doesn't count post-upgrade vacuum/analyze time, but depending on your usage you might get away with analyzing a few tables before letting users in, and doing the database-wide vacuum analyze while the database is in use. One of the other options might be better for you, but this one has worked well for us.

-Kevin
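The hard-link approach avoids copying data files at all: pg_upgrade hard-links the old cluster's files into the new data directory, so runtime is mostly independent of database size. A minimal sketch; the binary and data directory paths are assumptions, and note that after `--link` the old cluster can no longer be safely started:

```shell
# Both old and new clusters must be stopped first.
pg_upgrade \
  -b /usr/lib/postgresql/old/bin  -B /usr/lib/postgresql/new/bin \
  -d /var/lib/postgresql/old/main -D /var/lib/postgresql/new/main \
  --link

# Optimizer statistics are not carried over by pg_upgrade;
# rebuild them (on newer versions, --analyze-only skips the vacuum).
vacuumdb --all --analyze-only
```

This matches Kevin's point: the upgrade itself is minutes, and the analyze can be staged (hot tables first, the rest while the database is back in service).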
> -----Original Message-----
> From: francis picabia [mailto:fpicabia@gmail.com]
> Sent: Friday, January 20, 2012 3:39 PM
> To: Nicholson, Brad (Toronto, ON, CA)
> Cc: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Best practise for upgrade of 24GB+ database
>
> That's great information. 9.0 is introducing streaming replication,
> so that is another option I'll look into.

Streaming replication will not help you here, as you cannot replicate between versions with it. Slony allows replication to a new version.

Brad.
On Fri, Jan 20, 2012 at 16:45, Nicholson, Brad (Toronto, ON, CA) <bnicholson@hp.com> wrote:
> In the past I've used Slony to upgrade much larger database clusters than yours with minimal downtime (I'm talking seconds for the actual master switch). You set up a new replica on the new version and then move the master from old to new. No need to explicitly resync afterwards.

Would that work to upgrade from, say, 8.4 to 9.1? Using Slony?
On Mon, Jan 23, 2012 at 10:21 AM, Andreas Hasenack <panlinux@gmail.com> wrote:
> Would that work to upgrade from, say, 8.4 to 9.1? Using Slony?

It should. As long as both versions can support the same version of Slony, you're good to go.