Re: minimizing downtime when upgrading

From: Alban Hertroys
Subject: Re: minimizing downtime when upgrading
Date:
Msg-id: 44993CAF.3090905@magproductions.nl
In response to: Re: minimizing downtime when upgrading (Kenneth Downs <ken@secdat.com>)
Responses: Re: minimizing downtime when upgrading
List: pgsql-general
Kenneth Downs wrote:
> Richard Huxton wrote:
>
>> Kenneth Downs wrote:
>>
>>> AFAIK it has always been the case that you should expect to have to
>>> dump out your databases and reload them for version upgrades.
>>>
>>> Is anybody over at the dev team considering what an onerous burden
>>> this is?  Is anyone considering doing away with it?

Is there any good reason not to invest in a second database server? That
way you could upgrade the slave server, switch it over to become the
master, and then replicate the data (using Slony-I, for example) back to
the former master, which becomes the new slave.

It provides other benefits as well, like the ability to stay up during
system maintenance, load balancing, etc.
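
For what it's worth, the switchover itself mostly comes down to moving
the replication set's origin to the upgraded node. A rough slonik sketch
(I haven't run this exact script; the cluster name, node numbers, hosts
and conninfo strings are just placeholders for your own setup):

    # Promote the upgraded server (node 2) to be the origin of
    # replication set 1: lock the set on the old master so no more
    # writes slip in, then move the origin over.
    slonik <<'EOF'
    cluster name = upgrade_cluster;
    node 1 admin conninfo = 'host=old-db dbname=mydb user=slony';
    node 2 admin conninfo = 'host=new-db dbname=mydb user=slony';
    lock set (id = 1, origin = 1);
    move set (id = 1, old origin = 1, new origin = 2);
    EOF

After the move the old master keeps running as a subscriber, so you can
retire it, or keep it around as the new standby, at your leisure.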

> Kind of gets to the heart of things, though, doesn't it.
>
> It's the non-trivial stuff where we look to the machine to help us out.
> As a user of PostgreSQL, I benefit from a lot of things.  I gain a total
> advantage of "X" units of time/money.  Then it's time to upgrade and I
> have to give a lot of it back.  The more I use the package, the more
> non-trivial is my upgrade, and the more I give back.
> Regardless of whether a package is commercial or free, it strikes me as
> counter to the very soul of programming to build in a burden that
> increases with the user's use of the program, threatening even to tip
> the balance altogether away from its use.  This seems to be the very
> kind of feature that you want to programmatically control precisely
> because it is non-trivial.

Which is why you have to use the pg_dump from the new version to dump
your data, so that the dump will be compatible with the new server when
you restore it. That's a good example of this kind of non-trivial step
already being handled programmatically.
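
As a sketch of what that looks like in practice (the paths, hosts and
ports below are only examples for a side-by-side installation):

    # Dump the old cluster using the *new* version's pg_dump and restore
    # straight into the new cluster, without an intermediate file.
    /usr/local/pgsql-new/bin/pg_dump -h old-db -p 5432 mydb \
        | /usr/local/pgsql-new/bin/psql -h new-db -p 5433 mydb

Piping it like that avoids writing a (possibly huge) intermediate dump
file, though the restore still takes as long as it takes.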

Your real burden isn't the (possible!) data incompatibility between
major versions, but the fact that your data grows. The more data you
have, the more time a dump/restore will take.

You could attempt to just upgrade in place and hope your data can still
be interpreted by the newer major version (you should dump first, of
course). You'll want some kind of checksums over your data to verify
that everything came through intact.
This method can't be expected to always work; that would be nearly
impossible to guarantee. There will be changes to the on-disk data
structures (for the better), for example. I suppose the developers could
give some estimate of your chances...
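
A crude version of such a check (the table names and hosts are
placeholders, and ordering by the first column is assumed to be stable)
could look like:

    # Record per-table row counts and an md5 over the sorted rows on the
    # old server, repeat against the upgraded server, and diff the two.
    for t in customers orders invoices; do
        echo -n "$t count: "
        psql -h old-db -At -c "SELECT count(*) FROM $t" mydb
        echo -n "$t md5:   "
        psql -h old-db -At -c "SELECT * FROM $t ORDER BY 1" mydb | md5sum
    done > before.txt
    # ...same loop against new-db > after.txt, then:
    diff before.txt after.txt

An empty diff only tells you the row contents survived; it says nothing
about indexes, sequences, and the like.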

As mentioned, with a replicated setup your trouble should be minimal.

P.S. We don't use replication ourselves yet, but we probably will soon.
--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
    7500 AK Enschede

// Integrate Your World //
