Re: (A) native Windows port

From: Lamar Owen
Subject: Re: (A) native Windows port
Date: 9 July 2002
Msg-id: 200207091909.19990.lamar.owen@wgcr.org
In reply to: Re: (A) native Windows port  (Hannu Krosing <hannu@tm.ee>)
Responses: Re: (A) native Windows port  (Rod Taylor <rbt@zort.ca>)
           Re: (A) native Windows port  (Oliver Elphick <olly@lfix.co.uk>)
           Re: (A) native Windows port  (Hannu Krosing <hannu@tm.ee>)
List: pgsql-hackers
On Tuesday 09 July 2002 04:17 pm, Hannu Krosing wrote:
> On Tue, 2002-07-09 at 22:10, Lamar Owen wrote:
> > The pre-upgrade script is run in an environment that isn't robust enough
> > to handle that.  What if you run out of disk space during the dump?

> You can either check beforehand or abort and delete the offending
> dumpfile.

And what if you have enough disk space to do the dump, but then that causes 
the OS upgrade to abort because there wasn't enough space left to finish 
upgrading (larger packages, perhaps)?  The system's hosed, and it's our 
fault.
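
To make that concrete: the sort of %pre check being proposed would look
something like this (a sketch only -- the paths are illustrative, the 2x
dump-size estimate is a guess, and, as I get to below, du, df, and awk
aren't even in the path when it matters):

    # Rough pre-dump space check -- assumes du, df, and awk exist,
    # which they do not under anaconda:
    needed=`du -sk /var/lib/pgsql/data | awk '{print $1 * 2}'`   # crude dump estimate
    avail=`df -k /var/lib/pgsql | awk 'NR == 2 {print $4}'`
    if [ "$avail" -lt "$needed" ]; then
        echo "not enough free space for a pre-upgrade dump" >&2
        exit 1
    fi

And even when that check passes, it says nothing about the space the rest
of the OS upgrade needs afterward, which is exactly the failure above.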

> > What if a postmaster is running -- and many people stop their postmaster
> > before upgrading their version of PostgreSQL?

> It is quite easy to both check for a running postmaster and start/stop
> one.

Not when there is no ps in your path.  Or pg_ctl for that matter.  Nor is 
there necessarily a /proc tree waiting to be exploited.  We're talking the 
anaconda environment, which is tailored for OS installation and upgrading.  
You cannot start a postmaster; you cannot check to see if one is running -- 
you can't even check to see if you're in the anaconda chroot or not, so that 
you can use more tools if not in the OS installation mode.  Again -- the 
total OS upgrade path is a big part of this scenario, as far as the RPMs are 
concerned.  The Debian package may or may not have as grievous a structure.

The only tool you can really use under the anaconda chroot is busybox, and it 
may not do what you want it to.
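
For illustration, the "quite easy" check presumes an environment like the
following (a sketch; the paths are illustrative, and not one of these
tools is present under the anaconda chroot):

    # The "easy" running-postmaster check -- unusable during an OS install:
    if pg_ctl status -D /var/lib/pgsql/data >/dev/null 2>&1; then
        echo "postmaster appears to be running" >&2
        exit 1
    fi
    # Fallbacks that are just as unavailable there:
    ps ax | grep '[p]ostmaster'
    test -f /var/lib/pgsql/data/postmaster.pid && echo "pidfile present (live or stale)"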

> > Besides, at least in the case of the RPM, during OS upgrade time the %pre
> > scriptlet (the one you allude to) isn't running in a system with all the
> > normal tools available.

> I don't think that postmaster needs very many normal tools - it should
> be quite independent, except for compat libs for larger version
> upgrades

The problem there is that you really have no way to tell the system which sets 
of libraries you want.  More to the point: RPM dependencies cannot take 
conditionals and have no concept of if..then.  Nor can you tell the system to 
_install_ the new postgresql instead of _upgrade_ (incidentally, in the RPM 
context an upgrade is an install of the new version followed by an uninstall 
of the old one -- if the new one overwrote files, their traces are just wiped 
from the RPM database; if they weren't overwritten, the files get wiped along 
with their respective database entries).  If I could _force_ no upgrades, it 
would be much easier -- but I can't.  Nor can I be sure the %pre scriptlet 
will be run -- some people are so paranoid that they use rpm -U --noscripts 
religiously.

Thus, when the old postgresql rpm's database entries (in practice virtually 
every old executable gets overwritten) are removed, its dependency 
information is also removed.  As the install/upgrade path builds a complete 
dependency tree of the final installation as part of the process, it knows 
whether the compat libs are needed or not.  If no other program needs them, 
you don't get them, even if you kept an old backend around that does need 
them.  But you really can't make the -server subpackage Require the compat 
packages, because you don't necessarily know what they will be named, or 
anything else they will provide -- that is, if compat libs are even available 
for the version you're upgrading from.
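
In spec-file terms, this is roughly what the -server subpackage would have
to carry -- and the package names below are guesses, which is precisely
the problem, since RPM gives me no conditional to hang them on:

    # Hypothetical spec fragment: unconditionally wrong for somebody.
    %package server
    Requires: compat-glibc        # only wanted when crossing a libc break
    Requires: compat-libstdc++    # and only if the distribution ships it at all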

> > Nor is there a postmaster running.  Due to a largish
> > RAMdisk, a postmaster running might cause all manners of problems.

> I don't know anything about the largish RAMdisk, what I meant was that
> postmaster (a 2.7 MB program with ~4 MB RAM footprint) could include the
> functionality of pg_dump and be runnable in single-user mode for dumping
> old databases.

If a standalone backend could reliably dump the database without needing 
networking and many of the other things we take for granted (the install mode 
is a cut-down single-user mode of sorts, running in a chroot of a sort), then 
it might be worth looking at.
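
If it could, the invocation might be as simple as this sketch (the
held-over binary path is hypothetical, and a 7.x standalone backend takes
its queries on stdin -- no postmaster, no sockets):

    # Hypothetical: pull data out through a preserved standalone backend.
    echo "COPY mytable TO '/var/lib/pgsql/mytable.copy';" | \
        /usr/lib/pgsql/old-backend/postgres -D /var/lib/pgsql/data mydb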

> > And an error in the scriptlet could potentially cause the OS upgrade to
> > abort in midstream -- not a nice thing to do to users, having a package

> But is it not the same with _every_ package?  Is there any actual
> upgrading done in the pre/post scripts or are they generally not to be
> trusted?

No other package is so *different* as to require such a complicated upgrade 
process.  Some packages do more with their scriptlets than others, but no 
package does anything near as complicated as dumping a database.  

> We already do a pretty good job with pg_dump, but I would still not
> trust it to do everything automatically and erase the originals.

And that's a big problem.  We shouldn't have that ambivalence.  IOW, I think 
we need more upgrade testing.  I don't think I've seen a cycle yet that 
didn't have upgrade problems.

> If we start claiming that postgresql can do automatic "binary" upgrades
> there will be much fun with people who have some application that runs
> fine on 7.0.3 but barfs on 7.1.2, even if it is due to stricter
> adherence to SQL99 and the SQL is completely out of the control of rpm/apt.

That's just us not being backward compatible.  I'm impacted by those things, 
being that I'm running OpenACS here on 7.2.1, when OACS is optimized for 7.1.  
Certain things are very broken.

> There may be even some lazy people who will think that now is the time
> to auto-upgrade from 6.x ;/

And why not?  If Red Hat Linux can upgrade a whole operating environment from 
version 2.0 all the way up to 7.3 (which they claim), why can't we?  If we 
can just give people the tools to deal with potential problems after the 
upgrade, then I think we can do it.  Such a tool as an old-version dumper 
would be a lifesaver to people, I believe.

> Ok. But would it be impossible to move the old postmaster to some other
> place, or is the environment too fragile even for that ?

That is what I have done in the past -- the old backend got copied over (the 
executable), then a special script was run (after upgrade, manually, by the 
user) that tried to pull a dump using the old backend.  It wasn't reliable.  
The biggest problem is that I have no way of ensuring that the old backend's 
dependencies stay satisfied -- 'satisfied' meaning that the old glibc stays 
installed for compatibility.  Glibc, after all, is being upgraded out from 
under us, and I can't stop it or even slow it down.
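
About the only after-the-fact sanity check available is something like
this (assuming ldd itself still works after the upgrade; the backend path
is again hypothetical):

    # Any "not found" line means the held-over backend is now useless:
    ldd /usr/lib/pgsql/old-backend/postgres | grep 'not found'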

And this could even be that most pathological of cases, where an a.out-based 
system is being upgraded to an ELF system without a.out kernel support.  
(point of note: PostgreSQL first appeared in official Red Hat Linux as 
version 6.2.1, released with Red Hat Linux 5.0, which was ELF/glibc 
(contrasted to 3.0.3 which was a.out/libc4 and 4.x which was ELF/libc5) -- 
but I don't know about the Debian situation and its pathology.)

> If we move the old postmaster instead of copying then there will be a
> lot less issues about running out of disk space :)

The disk space issue is with the ASCII dump file itself.  Furthermore, what 
happens if the dumpfile is greater than MAXFILESIZE?  Again, wc isn't in the 
path, because it too is being upgraded out from under us -- nothing is left 
untouched by the upgrade EXCEPT the install image RAMdisk, which has a very 
limited set of tools (and a nonstandard kernel to boot).  Networking might or 
might not be available.  Unix domain sockets might or might not be available.
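
Outside the install environment, the stock answer to an oversized dumpfile
is to pipe through split -- which only underlines the point, since split
is one more tool that isn't there (paths illustrative):

    # The usual large-dump workaround on a normal, running system:
    pg_dump mydb | split -b 1024m - /var/lib/pgsql/dump.part.
    # And to restore:
    cat /var/lib/pgsql/dump.part.* | psql mydb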

But the crux is that the OS upgrade environment is designed to do one thing 
and one thing alone -- get the OS installed and/or upgraded.  General purpose 
tools just take up space on the install media, a place where space is at a 
very high premium.

> What we are facing here is a problem similar to trying to upgrade all users'
> C programs when upgrading gcc. While it would be a good thing, nobody
> actually tries to do it - we require them to have source code and to do
> the "upgrade" manually.

Is that directly comparable?  If you have a lot of user functions written in C 
then possibly.  But I'm not interested in pathological cases -- I'm
interested in something that works OK for the majority of users.  As long as 
it works properly for users who aren't sophisticated enough to need the 
pathological cases handled, then it should be available.  Besides, one can 
always dump and restore if one wants to.  And just how well does the 
venerable dump/restore cycle work in the presence of these pathological 
cases?

Red Hat Linux doesn't claim upgradability in the presence of highly 
pathological cases, such as rogue software installed from non-RPM sources or 
non-Red Hat RPMs (particularly Ximian Gnome); you have to go through a 
process to handle those.  But it is something you can recover from after the 
upgrade is complete.  That's what I'm after.  I don't hold out hope for a 
fully automatic upgrade -- it would be nice, but we are too extensible for it 
to be practical.  No -- I want tools to be able to recover my old data 
without the old-version backend held over from the previous install.  And I 
think this is a very reasonable expectation.

> That's what I propose - dump all databases in pre-upgrade (if you are
> concerned about disk usage, run it twice, first to | wc and then to a
> file) and try to load in post-upgrade.

The wc utility isn't in the path in an OS install situation.  The df utility 
isn't in the path, either.  You can use python, though. :-)  Not that that 
would be a good thing in this context, however.
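
For what it's worth, faking df with anaconda's python would look
something like this (a sketch; os.statvfs returns a plain tuple in the
pythons of that era -- index 1 is f_frsize, index 4 is f_bavail):

    # Free kilobytes under the data directory, without df:
    python -c 'import os; s = os.statvfs("/var/lib/pgsql"); print s[4] * s[1] / 1024'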

> There will still be some things that are impossible to "upgrade", like
> upgrading a.out "C" functions to elf format backend.

If a user is sophisticated enough to write such, that user is sophisticated 
enough to take responsibility for the upgrade.  I'm not talking about users 
of that level here.  But even then, it would be nice to at least get the data 
back out -- the function can then be rebuilt easily enough from source.

> Perhaps we will be able to detect what we can actually upgrade and bail
> out if we find something unupgradable?

All is alleviated if I can run a utility after the fact to read in my old 
data, without requiring the old packaged binaries.  I don't have to work 
around ANYTHING.

Again I say -- would such a data dumper not be useful in cases of system 
catalog corruption that prevents a postmaster from starting?  I'm talking 
about a multipurpose utility here, not just something to make my life as RPM 
maintainer easy.

The pg_fsck utility is a good beginning to such a program.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

