The Hermit Hacker <scrappy@hub.org> writes:
>> Tom Ivar Helbekkmo <tih+mail@Hamartun.Priv.NO> writes:
>>>> Works fine for me, anyway. I'm running CVS 1.7.3 over RCS 5, and
>>>> it's pulling the PostgreSQL distribution in as I type.
I'm at the same point using cvs 1.9 and rcs 5.7. I also see the
bug that individual files are checked out with permissions 666.
(I've seen the same thing with Mozilla's anon CVS server, BTW.
So if it's a server config mistake rather than an outright CVS bug,
then at least Marc is in good company...)
> Odd...it was doing a 'second checkout' that screwed me, where I
> didn't think it worked...try doing 'cvs -d <> checkout -P pgsql' and tell
> me what that does...
I'd expect that to choke, because you've specified a nonexistent
repository...
Why would you need to do a second checkout anyway? Once you've got
a local copy of the CVS tree, cd'ing into it and saying "cvs update"
is the right way to pull an update.
BTW, "cvs checkout" is relatively inefficient across a slow link,
because it has to pull down each file separately. The really Right Way
to do this (again stealing a page from Mozilla) is to offer snapshot
tarballs that are images of a CVS checkout done locally at the server.
Then, people can pull a fresh fileset by downloading the tarball, and
subsequently use "cvs update" within that tree to grab updates.
In other words, the snapshot creation script should go something like
rm -rf pgsql
cvs -d :pserver:anoncvs@postgresql.org:/usr/local/cvsroot co pgsql
tar cvfz postgresql.snapshot.tar.gz pgsql
I dunno how you're doing it now, but the snapshot does not contain
the CVS control files so it can't be used as a basis for "cvs update".
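To illustrate the point, here's a quick sanity check you can run anywhere (the file contents and paths below are made up for the demonstration; no actual cvs is involved). A real "cvs co" leaves a CVS/ control directory (Root, Repository, Entries) in every directory of the tree, and tar happily preserves those, so a tarball made from a genuine checkout gives downloaders a tree that "cvs update" can work with:

```shell
# Fake the shape of a server-side checkout: each directory of a real
# checkout contains a CVS/ control directory with Root and Repository.
mkdir -p pgsql/CVS pgsql/src/CVS
echo ':pserver:anoncvs@postgresql.org:/usr/local/cvsroot' > pgsql/CVS/Root
echo 'pgsql' > pgsql/CVS/Repository
echo 'dummy source file' > pgsql/src/main.c

# Build the snapshot the same way the server-side script would.
tar cfz postgresql.snapshot.tar.gz pgsql

# Unpack it somewhere else, as a downloader would.
mkdir unpack
tar xfz postgresql.snapshot.tar.gz -C unpack

# The control files survive the round trip, so "cvs update" run
# inside unpack/pgsql would know which repository to talk to.
cat unpack/pgsql/CVS/Root
```

The corollary is that the snapshot script must tar up an actual checkout, not an export or a hand-assembled file list, or the CVS/ directories won't be there.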
regards, tom lane
PS: for cvs operations across slow links, the Mozilla guys recommend
-z3 (e.g., "cvs -z3 update") to apply gzip compression to the data being
transferred. I haven't tried this yet but it seems like a smart idea,
especially for a checkout.
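For what it's worth, rather than typing -z3 every time, the option can go in ~/.cvsrc, the per-user defaults file cvs reads at startup; something like:

```
cvs -z3
update -d -P
checkout -P
```

The "cvs" line supplies global options; the others are per-command defaults (-d to pick up new directories on update, -P to prune empty ones). Whether those per-command defaults suit everyone is a matter of taste, so take them as a suggestion.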