Re: (A) native Windows port

From: Hannu Krosing
Subject: Re: (A) native Windows port
Date:
Msg-id: 1026245845.2020.42.camel@rh72.home.ee
In reply to: Re: (A) native Windows port  (Lamar Owen <lamar.owen@wgcr.org>)
Responses: Re: (A) native Windows port  (Lamar Owen <lamar.owen@wgcr.org>)
List: pgsql-hackers
On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:
> On Tuesday 09 July 2002 01:46 pm, Hannu Krosing wrote:
> > On Tue, 2002-07-09 at 18:30, Oliver Elphick wrote:
> > > The main problem is getting access to the user data after an upgrade.
> 
> > Can't it be dumped in pre-upgrade script ?
> 
> The pre-upgrade script is run in an environment that isn't robust enough to 
> handle that.  What if you run out of disk space during the dump? 

You can either check beforehand or abort and delete the offending
dumpfile.
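
A rough sketch of that kind of check in a shell scriptlet could look like
this (the paths are only examples, not what the RPMs actually use):

    # do we have at least as much free space as the cluster occupies now?
    NEEDED=`du -sk /var/lib/pgsql/data | awk '{print $1}'`
    FREE=`df -Pk /var/lib/pgsql | awk 'NR==2 {print $4}'`
    if [ "$FREE" -lt "$NEEDED" ]; then
        echo "not enough free disk space for a dump, aborting" >&2
        exit 1
    fi
    # and if the dump itself dies halfway, do not leave a truncated file behind
    su -l postgres -c "pg_dumpall > /var/lib/pgsql/dumpfile" \
        || rm -f /var/lib/pgsql/dumpfile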

> What if a postmaster is running -- and many people stop their postmaster before 
> upgrading their version of PostgreSQL?

It is quite easy to both check for a running postmaster and start/stop
one.
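
For example, something along these lines with pg_ctl (the data directory
path is again just an example):

    # is a postmaster already running on this data directory?
    if pg_ctl -D /var/lib/pgsql/data status >/dev/null 2>&1; then
        echo "postmaster already running"
    else
        pg_ctl -D /var/lib/pgsql/data -w start      # start one just for the dump
        STARTED_BY_US=yes
    fi
    # ... dump here ...
    if [ "$STARTED_BY_US" = "yes" ]; then
        pg_ctl -D /var/lib/pgsql/data -m fast stop
    fi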

> Besides, at least in the case of the RPM, during OS upgrade time the %pre 
> scriptlet (the one you allude to) isn't running in a system with all the 
> normal tools available.

I don't think that the postmaster needs very many normal tools - it
should be quite independent, except for compat libs for larger version
upgrades.
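
A quick way to see which compat libs that actually means (the binary path
is just an example):

    # shared libraries the old backend is linked against
    ldd /usr/bin/postmaster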

> Nor is there a postmaster running.  Due to a largish 
> RAMdisk, a postmaster running might cause all manners of problems.

I don't know anything about the largish RAMdisk. What I meant was that
the postmaster (a 2.7 MB program with a ~4 MB RAM footprint) could
include the functionality of pg_dump and be runnable in single-user mode
for dumping old databases.
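
For what it is worth, the standalone backend is already run roughly like
this; the dump switch is of course only the proposal above and does not
exist today:

    # single-user (standalone) backend as it works now:
    su -l postgres -c "postgres -D /var/lib/pgsql/data template1"
    # hypothetical built-in dump mode (does NOT exist):
    # su -l postgres -c "postgres -D /var/lib/pgsql/data --dump-all > /var/lib/pgsql/dumpfile"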

> And an error in the scriptlet could potentially cause the OS upgrade to abort 
> in midstream -- not a nice thing to do to users, having a package during 
> upgrade abort their OS upgrade when it is a little over half through, and in 
> an unbootable state.... No, any dumping of data cannot happen during the %pre 
> script -- too many issues there.

But is it not the same with _every_ package? Is there any actual
upgrading done in the pre/post scripts, or are they generally not to be
trusted?

> > IMHO, if rpm and apt can't run a pre-install script before deleting the
> > old binaries they are going to replace/upgrade then you should complain
> > to authors of rpm and apt.
> 
> Oh, so it's RPM's and APT's problem that we require so many resources during 
> upgrade.... :-)

As you said: "The pre-upgrade script is run in an environment that isn't
robust enough to handle that". Ok, maybe it's the environmental issue
then ;)

But more seriously - it is a DATAbase upgrade, not a usual program
upgrade, where the data part is minuscule, usually no more than a
configuration file. Postgres, as a very extensible database, has the
ability to keep much of its functionality inside the database itself.

We already do a pretty good job with pg_dump, but I would still not
trust it to do everything automatically and erase the originals.

If we start claiming that postgresql can do automatic "binary" upgrades,
there will be much fun with people who have some application that runs
fine on 7.0.3 but barfs on 7.1.2, even if that is due to stricter
adherence to SQL99 and the SQL is completely outside the control of
rpm/apt.

There may even be some lazy people who will think that now is the time
to auto-upgrade from 6.x ;/

> > The right order should of course be
> 
> > 1) run pre-upgrade (pg_dumpall >dumpfile)
> > 2) upgrade
> > 3) run post-upgrade (initdb; psql < dumpfile)
> 
> All but the first step works fine.  The first step is impossible in the 
> environment in which the %pre script runs.

Ok. But would it be impossible to move the old postmaster to some other
place, or is the environment too fragile even for that?

If we move the old postmaster instead of copying it, there will be far
fewer issues with running out of disk space :)
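
In the %pre that could be as simple as this (locations again only
examples):

    # keep the old server binaries around instead of copying anything big
    mkdir -p /usr/lib/pgsql/previous
    mv /usr/bin/postmaster /usr/bin/postgres /usr/bin/pg_dumpall /usr/lib/pgsql/previous/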

What we are facing here is a problem similar to trying to upgrade all
users' C programs when upgrading gcc. While it would be a good thing,
nobody actually tries to do it - we require them to have the source code
and to do the "upgrade" manually.

That's what I propose - dump all databases in pre-upgrade (if you are
concerned about disk usage, run it twice, first piped through wc to see
how big the dump will be, and then to a file) and try to load them in
post-upgrade.
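
In scriptlet form the whole thing would look roughly like this (paths are
hypothetical and error handling is left out):

    # %pre - the old binaries are still in place
    su -l postgres -c "pg_dumpall | wc -c"                  # size check, if wanted
    su -l postgres -c "pg_dumpall > /var/lib/pgsql/upgrade.dumpfile"

    # %post - the new binaries are installed
    mv /var/lib/pgsql/data /var/lib/pgsql/data.old           # keep the old cluster
    su -l postgres -c "initdb -D /var/lib/pgsql/data"
    su -l postgres -c "pg_ctl -D /var/lib/pgsql/data -w start"
    su -l postgres -c "psql -f /var/lib/pgsql/upgrade.dumpfile template1"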

There will still be some things that are impossible to "upgrade", like
moving a.out "C" functions to an ELF-format backend.

Perhaps we will be able to detect what we can actually upgrade and bail
out if we find something unupgradable?
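
Detecting the obvious cases should not be hard - something like this, run
against each database, would list the user-defined "C" functions whose
object files a plain dump/restore cannot carry forward:

    su -l postgres -c "psql -c \"SELECT proname, probin FROM pg_proc p, pg_language l \
        WHERE p.prolang = l.oid AND l.lanname = 'c'\" template1"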

-------------------
Hannu



