Best practice for upgrade of 24GB+ database

From: francis picabia
Subject: Best practice for upgrade of 24GB+ database
Date:
Msg-id: CA+AKB6FCmzst0vtMeu7xbemXeFeBt-_5TUnMdEB0r_pgsrVheA@mail.gmail.com
Replies: Re: Best practice for upgrade of 24GB+ database  ("Nicholson, Brad (Toronto, ON, CA)" <bnicholson@hp.com>)
List: pgsql-admin
In an academic setting, we have a couple of larger-than-typical
Postgres databases.  One, for Moodle, is now 15 GB; another, for a
research project, is currently 24 GB.

I noticed while upgrading PostgreSQL from 8.3 to 8.4 on Debian that
the downtime on the 24 GB research database is extensive when using
pg_upgradecluster.

It has now been 26 hours of downtime for the database, and about 18 GB
of the 24 GB has been restored into the 8.4 destination so far.
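
For reference, the upgrade itself was a single command along these
lines (a sketch, assuming Debian's postgresql-common tooling and the
default cluster name "main"):

    # pg_upgradecluster stops the 8.3 cluster, creates an 8.4 cluster,
    # and dumps/reloads all the data into it:
    sudo pg_upgradecluster -v 8.4 8.3 main

    # Once happy with the result, the old cluster can be dropped:
    sudo pg_dropcluster 8.3 main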

I read some of the tips on the PostgreSQL wiki about performance tweaks
( http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server )
and had implemented some improvements, such as raising shared_buffers,
in the 8.3 instance prior to the upgrade.  If I were doing this again,
I would find the postgresql.conf that pg_upgradecluster creates for the
8.4 destination and tune it before the run.
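
For example, something like the following in the destination's
postgresql.conf for the duration of the reload only (the values are
guesses for our hardware, and fsync would go back on afterwards):

    maintenance_work_mem = 1GB   # much faster index rebuilds
    checkpoint_segments = 32     # fewer checkpoints during the bulk load
    fsync = off                  # risky in general, but a crash here
                                 # only means redoing the reload
    autovacuum = off             # no point vacuuming mid-restore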

How do others manage larger database upgrades while minimizing
downtime?  Do you avoid pg_upgradecluster and simply do a pg_restore
from a dump made prior to the upgrade?  Do you set up replication
and resync after the upgrade is complete?  Googling for info on this,
I've only found remarks that it takes longer than you'd expect.
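
On the dump/restore route, I gather 8.4's pg_restore can load in
parallel with --jobs, which the pg_upgradecluster path doesn't take
advantage of.  Roughly (the database name is made up, and paths and
ports assume Debian's defaults with the new cluster on 5433):

    # Dump the 8.3 database using the newer 8.4 pg_dump, custom format:
    /usr/lib/postgresql/8.4/bin/pg_dump -Fc -p 5432 research > research.dump

    # Create the database in the 8.4 cluster, restore with 4 parallel jobs:
    createdb -p 5433 research
    /usr/lib/postgresql/8.4/bin/pg_restore -j 4 -p 5433 -d research research.dump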
