Re: checking in links to source control

From: Edward Peschko
Subject: Re: checking in links to source control
Date: Fri, 14 Sep 2001 18:36:53 -0700

On Fri, Sep 14, 2001 at 06:11:15PM -0400, Greg A. Woods wrote:
> [ On Friday, September 14, 2001 at 14:13:20 (-0700), Edward Peschko wrote: ]
> > Subject: Re: checking in links to source control
> >
> > Sorry you haven't heard the term. Its pretty common in OO - here the things
> > being serialized are objects, they are being serialized into bytes, and they
> > are being reconstructed by a remote host.
> I generally go for a copy of one of my many dictionaries, or some
> encyclopedia, or even, to learn the meaning of something.
> However the specific case of "object serialisation" you described is not
> really applicable outside the OO paradigm, nor to something that's
> primarily attribute control, or even the initiation of actions to
> instantiate attributes.  Changing an object representation into a
> canonical byte stream is quite a bit different conceptually than
> representing the desired state of file attributes, or actions that
> change file attributes, or desired symlink locations and their targets.

Hell, I think they are quite similar..

Symlinks are something that can't be stored directly as ordinary file content. Neither can directories. Nor can permissions. You serialize them into something different (a file), and then unserialize them to get back what you want.

The methodology seems quite similar to me...
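The round trip might look something like this sketch (the directory names and the one-line-per-link manifest format are made up for illustration):

```shell
# "Serialize" the symlinks in a tree into a plain file, then
# reconstruct them elsewhere. Paths and format are hypothetical.
set -e
mkdir -p src
ln -sfn /usr/lib/perl5 src/perllib

# serialize: record each link name and its target, one pair per line
find src -type l | while read -r l; do
    printf '%s %s\n' "$(basename "$l")" "$(readlink "$l")"
done > links.manifest

# unserialize: recreate every link from the flat file
mkdir -p dst
while read -r name target; do
    ln -sfn "$target" "dst/$name"
done < links.manifest
```

(A sketch only; it would need escaping for targets containing whitespace.)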

> How about we just start with a simple list of little (and not so little)
> languages in which you could implement a solution to creating and
> managing symlinks:
>       make, gmake, sh, csh, ksh, perl, python, tcl, ruby, pike, C,
>       STk, drscheme, elisp, elk, java, smalltalk, lisp, c++

I'm talking at the level of the *symlink*, not the level of the tool.
Of course there are lots of tools to handle creating a symlink. There are
hundreds of tools allowing the creation and manipulation of files (cp, rdist,
scp, rcp, etc., etc.). Does that mean that we shouldn't allow files into
source control?

> That's not to mention custom tools more suited directly to doing this
> kind of thing (but otherwise more often used to automate sysadmin tasks)
> such as cfengine, install, and mtree (and no doubt others).  Obviously
> you could write yet another similar tool in the language of your choice
> too.

Yeah, and I don't want to deal with any of them. And they don't grok certain
OSes very well (*windows*.. cough cough).

And in any case, the more rapid the development environment, and the fewer hoops
you have to jump through, the quicker things get done.

> I'm sure you can come up with solutions suitable for your specific needs
> and implemented in your language of choice.  The very simplest though is
> a simple shell statement that'll create one symlink:
>       #! /bin/sh
>       ln -fs target link

Like I said, been there, done that. I could put a shell script in every single
directory whose job is to seek out the perl lib directory and make a link
to it if it ain't there. I'd rather do nothing. I could make a shell script
that, when reverting back to an old version of a directory, does a cvs diff
between the old version's directory and the new, and removes all files that
don't exist in the new version. Yee-haw. I'd rather do nothing.
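The per-directory boilerplate being described might be no more than this (the directory layout is an assumption; the point is that it's repeated busywork in every checkout):

```shell
# Hypothetical per-directory fixup script: ensure a "perllib" symlink
# exists, pointing at a shared lib tree; do nothing if it's already
# there. All paths here are invented for the sketch.
set -e
mkdir -p work/common/perllib work/project
[ -e work/project/perllib ] || ln -s ../common/perllib work/project/perllib
# idempotent: a second run changes nothing
[ -e work/project/perllib ] || ln -s ../common/perllib work/project/perllib
```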

And so on, and so on, and so on. It's called 'up-front effort': the reason
you bundle things into a tool like this is ease of use. It's a lot of
effort at first, yeah, but after that initial effort you do nothing at all.

> Personally I'd probably just write a little script in whatever language
> seemed most convenient at the time and instruct users to run it after
> checking out the working directory or after unpacking a source release.

yeah, of course, you would. I'd rather not. I'd rather do nothing, and have
them do nothing. I'd rather leverage the power that attributes and serialization
would give to handle directories and binary files (correctly) and get the
power to check in symlinks, resource forks and what have you.

And after that, I'd like the ability to get fancy - to have translation between
symlinks and shortcuts. You say it isn't feasible. I say it is, through the 
concept of primary and secondary level attributes:

Say you check in a symlink on unix. It puts an entry into the RCS file that
looks like:

@link:  \

Now, you check it out on NT. Shortcuts on NT are a different story from links,
however, so you have a *translation table* which turns a link into a default
shortcut.

To wit: *if* (and only if) there is no @shortcut attribute, and *if* there is 
a @link attribute, use that link attribute to create a default shortcut.

Now, in all likelihood, the shortcut that is created by cvs will not look
totally correct. So the NT user will edit the thing that is created. When this
is then put back into source control, the attributes will look like:

        run=normal window

Now someone comes along and edits it on the Mac. Again, it goes through
the same steps and comes up with an alias, so your file would look like:


And so forth. Now you have three systems with 'equivalent' links/shortcuts/etc.
How are they equivalent? The *user* defines the way that they are equivalent.
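A minimal sketch of that fallback rule (use @link to synthesize a default only when no @shortcut attribute was checked in), with an invented one-attribute-per-line file format:

```shell
# Invented attribute file: "name value" pairs, as they might sit
# alongside the RCS data for a checked-in symlink.
cat > attrs <<'EOF'
link /usr/local/lib/perl5
EOF

# Translation rule: take the shortcut attribute when present;
# otherwise synthesize a default shortcut from the link attribute.
shortcut=$(awk '$1 == "shortcut" { print $2 }' attrs)
link=$(awk '$1 == "link" { print $2 }' attrs)
echo "${shortcut:-$link}" > default_shortcut
```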

If you were to do this in makefiles, not only would you have to get make,
you'd have to have messy if-logic to tell you which platform you are running
on, or have parallel build processes. I've done that before, and it ain't
pretty.
> You're not serializing anything here -- you're managing attributes, and
> in particular ensuring that they're in a specific state.  The easiest
> way to do that is to put them into the desired state, by force if
> necessary.

Of course I'm doing serialization. That's the definition of serialization:
packing something up into a state where it can be transmitted (and stored) and
then unpacking it somewhere else. We really are talking apples and apples
here - you are doing serialization through Makefiles (or what have you);
I'm just standardizing it through CVS.

> Once upon a time when I was using CVS to track changes to system related
> files I wrote an ~3500 line makefile and a bunch of little shell scripts
> to assist.  That stuff managed about 280 files and symlinks, including
> installing them in their correct destinations and ensuring they had the
> correct ownerships, permissions, etc.

And you wouldn't have to write a single line of code if we did this. Now,
tell me which one is easier.

> Today I think I'd use cfengine or something similar (it wasn't available
> back when I started my makefile).
> In the mean time I've gone back to simply using SCCS or RCS on a
> per-file basis and using 'mtree' to manage ownerships and permissions.
> I don't use symlinks that much so generally I don't have to try to track
> changes in them very often.

Again, it shows me a deficiency in the tool. Why not CVS, again? I could see
*CVS* and mtree as being a good solution..

> > Hence, the need for cvs to take in pluggable modules. Apache faced this
> > very issue when they were starting out; people wanted apache to do things
> > that weren't 'standard', hence they came up with the module scheme...
> > works pretty well.
> CVS already has pluggable modules.  They're invoked from the
> CVSROOT/*info files.  They don't do what you seem to want though....

I was thinking more along the '' level. I'll take a look at the info
files, though..

> If I read your intentions correctly you're also ignoring some
> fundamental change management issues too.  Remember what I said before
> about the requirement to handle a situation where a name in the
> hierarchy transitions from being a link to being a file and then back to
> being a link, perhaps with the links having several different targets
> just as the file might have different content at different times?  You
> need to ensure that all those changes can be tagged, diffed, merged,
> etc. just as I can do with my scripts or makefiles or mtree data files.

Well yeah, if I read you right, that's caveat emptor. Personally, I'd be happy
just to see that the file changed from being a link to being a file and then
back, but if you want to get fancier, you could say:


or better yet, follow the link and use its time of entry to figure out which
version to do the diff for. Hence,

diff -r1.2 -r1.6 -h <link>

could follow the links for both and do a diff on the resulting files, whereas

diff -r1.2 -r1.6 <link>

wouldn't follow the links. (I take -h from the similar tar option.) So that's
pretty easy to overcome.
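The proposed -h behaviour is easy to mock up with plain files standing in for two checked-out revisions (everything here is invented; real cvs has no such flag):

```shell
# Mock of "diff -h <link>": resolve the link in each revision first,
# then diff what the links point at. rev1/rev2 stand in for the
# hypothetical -r1.2 and -r1.6 checkouts.
set -e
mkdir -p rev1 rev2
printf 'alpha\n' > rev1/target.txt
printf 'beta\n'  > rev2/target.txt
ln -sfn target.txt rev1/link
ln -sfn target.txt rev2/link

# -h semantics: follow each revision's link before diffing
a="rev1/$(readlink rev1/link)"
b="rev2/$(readlink rev2/link)"
diff -u "$a" "$b" > link.diff || true
```

Without -h, the two links themselves would be compared instead of their targets.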

> Oh, but it is far easier to manage file attributes and create symlinks
> and such from script or makefile.  You've apparently only scratched the
> surface of the issue of trying to integrate such functionality of very
> questionable use into CVS.

Who says? You say it's questionable; you've got a preference for makefiles.
I don't. Personally I think that the reduction of a process that took ~3500
lines of script to one that takes exactly 0 would be enough to show you that
it would be of value.

And personally I think the added benefit of having revisioning on these files,
and having it integrated into one system, would be more than enough to show
that it would be of value. And of course, not having to learn *yet another
scripting language* like cfengine (plus the maintenance && training time that
implies) is enough to show that it has value.

> > There's another discussion going on right now
> > about a replacement for CVS simply because it doesn't handle binary data 
> > well.
> I think you're seriously confused.  CVS doesn't need replacing.  It does
> what it was designed to do reasonably well.

Yeah, right. There are two ways of viewing the world: "I'm right, and
everybody else is wrong", or "you know what, people might have a point here,
and I should change". I like the second viewpoint far better than the first.
Looking over the archive, this issue has come up again and again, and again.

And every single time this philosophical stance has been pushed right back
into people's faces.

Anyhow, it's not *me* who is seriously confused, or who even made the original
point. It's a user of *cvs* who did. Listen to your customers.

> Similarly CVS doesn't need to become a build system, even for the most
> basic task of trying to manage file attributes or symlinks.  That's the
> job of your build system.

Again, this forces your preconception of what a build process is and what a
project is down a person's throat. That is *not your job*. That is the job
of the project's *owner*.


ps - why are you avoiding the whole issue of the patches? I want to know who
I'd submit a patch to, how I would get it incorporated into CVS, etc. etc. etc.
This is a separate issue from the whole attributes thing - although yes,
my first patch to cvs would probably address this.

So - out of simple courtesy - how do you do it?

pps - I asked for feedback on why this wouldn't work, and you have given me
some (although a lot of it is still philosophical). IMO, I have demonstrated
that all of the points you mention are surmountable from a technical point
of view. So, keep giving it your best shot.. ;-)

ppps - btw, I checked out cfengine, and for most projects out there, cfengine
is *way* overkill. Talk about the 'right tool for the job'... I hope you didn't
refer Luke to *that*(!). I've implemented something like cfengine in perl --
and IMO it's a hell of a lot more maintainable and simple. But then again, I
might be biased...
