Re: CVS and link files


From: Eric Siegerman
Subject: Re: CVS and link files
Date: Mon, 20 Aug 2001 14:40:33 -0400
User-agent: Mutt/1.2.5i

On Mon, Aug 20, 2001 at 07:05:55PM +0200, Pascal Francq wrote:
> On Monday 20 August 2001 18:15, you wrote:
> >Do you mean Unix symbolic links?  If so, what happens is neither
> >of the above.  Unfortunately, CVS just ignores them.
> 
> Yes, I mean symbolic links.

Then no, CVS simply can't track them.

> Here is the situation:
> I have 2 projects: p1 and p2. These projects need common files c1, c2 and c3.
> Therefore I create a directory containing c1, c2 and c3 and make links to
> them in each project's directory.
> What I want is to include these links in the cvs tree, so that when someone 
> makes a checkout (or an update) these links are automatically created.

If the common files are in a separate directory -- or, at least,
if they can be put into a separate directory within the
sandboxes, whether or not they're separated in the repo -- then
you can use the CVSROOT/modules file to accomplish this.  Each
sandbox will have its own private copy of the common files, as it
does for the project-specific files, but all the private,
per-sandbox copies for *both* projects will be backed by the same
repository file, and thus, changes made by either project will be
visible to both.
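
For example, CVSROOT/modules could pull the shared directory into
each project as an "ampersand module".  Something along these lines
-- the names "common", "proj1" and "proj2" are only placeholders for
whatever your repository actually contains, and the modules-file
section of the Cederqvist manual has the exact syntax and quirks:

  # the shared files live in their own repository directory
  common    common

  # each project = its own repository directory, plus a private
  # copy of the common module checked out as a subdirectory
  p1        proj1  &common
  p2        proj2  &common

Then "cvs checkout p1" gives a p1 working directory holding the
proj1 files plus a common/ subdirectory, likewise for p2, and both
common/ copies are backed by the same files in the repository.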

The other commonly-suggested workaround is to have the build
system create the symlinks.
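
For instance, a trivial post-checkout script along these lines could
recreate them -- the ../common path is only a guess at where the
shared directory would sit relative to each project's working
directory:

  #!/bin/sh
  # Recreate the links to the shared files c1, c2 and c3 after
  # "cvs checkout".  Adjust COMMON to wherever the shared
  # directory really lives.
  COMMON=../common
  for f in c1 c2 c3; do
      ln -sf "$COMMON/$f" "$f"
  done

The same few lines could just as well live in a Makefile target, so
the links get (re)created as part of a normal build.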

Using the modules file probably makes the most sense in your
case.  If someone needs to change one of the common files, a
symlink would make their edits visible to the other sandboxes
immediately, before they've even been committed.  If everyone
has their own copy, changes don't become visible until the
changer commits them and the other users update, which is how
things Should Be (TM) in a CVS environment.

CVS should be able to track symlinks, IMO -- but even if it
could, local copies via the modules file would still probably be
best for your specific need.

Of course, the best answer of all from a purist software-
engineering point of view would be to make the common files a
separate "product", even if only internally, with its own q/a,
release management, etc., and with your projects p1 and p2 being
the "customers" for this new product.  Ideal in theory, but
perhaps too much overhead in practice.  In deciding between this
and the modules-file approach, some things to consider are:
  - how many files in this putative new product?
  - how many (internal) customers for it?
  - how often is it expected to change, and how much effect do
    changes have on the customers?

The informal modules-file approach may be fine for the situation
you described (three files shared between two projects).  But it
probably doesn't scale.  I suspect -- with no proof whatsoever --
that as both number of files and number of customers increase,
the complexity of managing it increases at least quadratically,
if not exponentially.  (Not sure what effect frequency-of-change
would have, but I'm sure it does have one.)  Even if it's just
quadratic, though, it doesn't take much of a scale-up before the
extra overhead of a separate-product approach starts to look like
a win.

--

|  | /\
|-_|/  >   Eric Siegerman, Toronto, Ont.        address@hidden
|  |  /
With sufficient thrust, pigs fly just fine. However, this is not
necessarily a good idea.
        - RFC 1925 (quoting an unnamed source)


