RE: Not a bug - an offer of code


From: Pete Randall
Subject: RE: Not a bug - an offer of code
Date: Thu, 5 Oct 2000 17:10:38 +0100

Paul:

Thanks for your response. Here's some more information as requested.

> I'm listening :). 
> 
> I've just been swamped, and your message requires much digesting. 

Sorry - I hope this is more digestible... You can skip the passages enclosed
in [...] on the first pass - they just provide more detail.

> First, can you give a more detailed explanation of exactly what you mean 
> by "connecting make to a CM server"?  What kind of connection are you 
> contemplating?  What would be the purpose, features, etc. of such a 
> connection? 
>
> Do you mean something along the lines of the way ClearCase's clearmake 
> stores element version information for each derived object? 

Almost exactly that, though the mechanism is very different from clearmake.

On connection, an MCX client passes context information including the
starting directory, the platform make is running on, and CM-server-specific
options. The options are hardcoded at present, but the set to use could be
selected at runtime without too much trouble.
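
To give a feel for it, here's a sketch of the connection context in C
(the identifiers are invented for illustration, not the actual MCX API):

    /* Sketch only: these names are not the real MCX identifiers. */
    #include <stdio.h>

    /* Context an MCX client passes to the CM server on connection. */
    struct mcx_context {
        const char *start_dir;   /* directory make was started in */
        const char *platform;    /* platform make is running on   */
        const char *cm_options;  /* CM-server-specific options
                                    (hardcoded at present)         */
    };

    /* Hypothetical connect call: open the transport and send the
       context before any update/preserve requests are issued. */
    int mcx_connect(const struct mcx_context *ctx)
    {
        printf("connect: dir=%s platform=%s options=%s\n",
               ctx->start_dir, ctx->platform, ctx->cm_options);
        return 0;   /* 0 = connected */
    }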

From that point, make issues a request through MCX whenever it wants to
update a target and informs MCX of every target it has updated
successfully.

The idea is that you never build with out-of-date source code and that if
someone else has already built a target, it is fetched (or "gotten", if you
prefer) rather than rebuilt.

Both "update" and "preserve" requests currently pass the list of unique
prerequisites and a "normalized" command string (which has everything
except the automatic variables and some function results expanded).

[
  This isn't quite what's needed. The full prerequisite list, a normalized
  version of the unexpanded command and a set of variable definitions
  including "pseudo-variables" for function results would be more useful.
  That's one of the reasons the protocol is being reviewed.
]
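
To make the exchange concrete, here's roughly the shape of a request and
its possible replies (invented names, not the real protocol; for instance,
with CC=cc and CFLAGS=-O2 a "normalized" command "$(CC) $(CFLAGS) -c $< -o $@"
would come out as "cc -O2 -c $< -o $@", the automatic variables left alone):

    /* Sketch only -- not the actual MCX wire format or API. */
    #include <stddef.h>

    /* What "update" and "preserve" requests currently carry. */
    struct mcx_request {
        const char  *target;     /* path of the target             */
        const char **prereqs;    /* unique prerequisites           */
        size_t       n_prereqs;
        const char  *norm_cmd;   /* "normalized" command string    */
    };

    /* Possible replies to an "update" request. */
    enum mcx_reply {
        MCX_BUILD,   /* no usable prebuilt target: build it locally */
        MCX_FETCHED, /* server supplied a matching prebuilt target  */
        MCX_STALE    /* source is out of date: don't build          */
    };

    /* Hypothetical calls: ask before building, report afterwards. */
    enum mcx_reply mcx_update(const struct mcx_request *req)
    {
        (void)req;          /* real code marshals and sends this   */
        return MCX_BUILD;   /* stub: always build locally          */
    }

    void mcx_preserve(const struct mcx_request *req)
    {
        (void)req;          /* real code reports a successful build */
    }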

How the mapping between target paths and the CM system's namespace is done
is left to the CM system. An MCX client supplies context information as
described above, but beyond that it doesn't get involved. 

MCX is intended to support multiple transports (so you could use shared
memory for local connections), but only the TCP/IP sockets transport is
being actively used at the moment.
    
[
  The only other transport implemented is UNIX domain sockets, which was
  frankly more trouble than it was worth.
]
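
The transport sits behind a small table of operations, along these lines
(illustrative, not the actual MCX declarations):

    /* Sketch of a pluggable transport -- names invented. */
    #include <stddef.h>

    struct mcx_transport {
        int (*open)(const char *endpoint);            /* -> handle  */
        int (*send)(int h, const void *buf, size_t n);
        int (*recv)(int h, void *buf, size_t n);
        int (*close)(int h);
    };

    /* A shared-memory or UNIX-domain-socket transport would supply
       its own table; only TCP/IP sockets are in active use. */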

You'll notice from the code that MCX also has calls to do remote job
management. This was added because Dimensions maintains data for doing
this type of thing (though remote job management isn't actually
implemented at present) - it may not be appropriate with other CM systems.

> Also, what sort of method are you considering for maintaining local 
> state?  There are more things that local state would be used for than 
> just rule comparison; actually rule comparison is relatively low on my 
> personal priority list.  However, a method of doing stateful 
> "out-of-date" algorithms is _very_ interesting to me.  So, I'd prefer a 
> method of keeping state which was generic enough to allow it to be 
> expanded with other information than just the command script. 

Stateful "out-of-date" maintenance is my main goal too: rule comparison is
part of that. The problem is that good CM systems are paranoid: unless they
"know" a target was built using a given command and set of prerequisites,
it won't make it into the system...

[
  For example, Dimensions matches files on disk against controlled versions
  using modification time, length and checksum combined with simple
  majority voting. The length must always match and either modification
  time or the checksum must match. This avoids unnecessary checksum
  calculation while preventing unnecessary updates because a (controlled)
  file has been touched or the NFS clock skew problem has reared its ugly
  head :^) The command used to build a target is also considered
  significant. There's no concept of "newer" as such - it's either right or 
  wrong.

  Needless to say, MCX has been designed with this in mind, so the requests
  contain fields for the information. At the moment, all checksum
  calculations are actually done in Dimensions code. That isn't a
  requirement, just an artifact of the current implementation. It ended up
  that way because I couldn't think of an elegant solution when the
  checksum supported by a client differed from the one required by the CM
  server.
]
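
In C, that matching rule comes out to something like this (my paraphrase,
not Dimensions source; the byte-sum checksum is just a stand-in):

    /* Paraphrase of the matching rule, not Dimensions code. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>

    /* Metadata recorded for the controlled version. */
    struct cm_record {
        off_t    length;
        time_t   mtime;
        unsigned checksum;
    };

    /* Trivial stand-in checksum (byte sum); the real one is whatever
       the CM server requires. */
    static unsigned file_checksum(const char *path)
    {
        FILE *f = fopen(path, "rb");
        unsigned sum = 0;
        int c;
        if (f) {
            while ((c = getc(f)) != EOF)
                sum += (unsigned)c;
            fclose(f);
        }
        return sum;
    }

    /* Length must always match, and either the modification time or
       the checksum must match.  The checksum is only computed when
       the mtime test fails, so a touched file or NFS clock skew
       alone never forces an update. */
    static bool matches(const char *path, off_t length, time_t mtime,
                        const struct cm_record *cm)
    {
        if (length != cm->length)
            return false;
        if (mtime == cm->mtime)
            return true;
        return file_checksum(path) == cm->checksum;
    }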

With a sandbox model, this means you either have to run every make
connected to the CM system or face a full rebuild when your work is done.
With the best will in the world, a "CM aware" build is never going to be as 
fast as a local one - and we both know how developers *hate* to wait :^)
Using local state imposes a smaller penalty and means you can build
"offline" and put back the new targets later.

My initial thoughts have been to use a per-directory database in which
make maintains the data used by MCX, along with a tag if the file is one
the CM system knows about and an update sequence number. A prerequisite
list and command is stored for each file as needed and each prerequisite
is listed with its update sequence number. All make has to do is check
that the list and command are the same and that the sequence numbers
match. If not, the target is out-of-date.

[
  It's fairly obvious something more compact than paths will be needed
  for the prerequisite references (though disk space is cheap these days).
  I don't think using a single database would be appropriate: the
  per-directory approach is more robust if things are accidentally deleted
  and makes it easier to manage building different subsets of large
  packages.
]
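
As a sketch, the check make would do against that database (all names
invented, since the format is far from settled; the current sequence
numbers are passed in rather than looked up, to keep it short):

    /* Sketch of the out-of-date test against the per-directory state. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Per-target record in the per-directory database. */
    struct state_entry {
        const char  *command;   /* command used for the last build  */
        const char **prereqs;   /* prerequisite paths               */
        unsigned    *seqs;      /* each prerequisite's update
                                   sequence number at build time    */
        size_t       n;
    };

    /* Up to date only if the command and prerequisite list are
       unchanged AND every recorded sequence number still matches
       the prerequisite's current one. */
    static bool up_to_date(const struct state_entry *e,
                           const char *command,
                           const char **prereqs,
                           const unsigned *cur_seqs, size_t n)
    {
        if (strcmp(e->command, command) != 0 || e->n != n)
            return false;
        for (size_t i = 0; i < n; i++)
            if (strcmp(e->prereqs[i], prereqs[i]) != 0
                || e->seqs[i] != cur_seqs[i])
                return false;
        return true;
    }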

> I realize the code is available, and I've downloaded it and will take a 
> look, but explanations can help a lot as well. 

I hope this has helped a bit - if it wasn't what you were after I'm sure
you'll let me know :-) I'm also sorry about the "non-standard" build setup.
I couldn't get the autoconf files that came with 3.74 to work with any
version of autoconf I could find, so I punted and peeled bits out of our
internal build setup. As 3.79.1 is put together with pukka autoconf stuff,
I can avoid this in future - thanks!

> Finally, note that releasing the code under the GPL is necessary, but 
> not sufficient, to allow the code to be integrated into the official 
> version of GNU make.  In addition the copyright of the code must be 
> assigned to the FSF.  The assignee is granted back rights to use the 
> code however he deems fit, but the copyright itself is owned by the FSF. 
> If you have questions about this please contact me and I'll show you a 
> sample.  Or, you can contact RMS directly. 

Assigning copyright on new files that are part of GNU Make itself is no
problem, as they are already covered by the GPL. The MCX library itself
may be trickier: it's currently released under the (now deprecated) Lesser
GPL so we can link it with our commercial code. If you adopt the GNU Make
"MCX binding" code and leave MCX as a separate work, it's a non-issue -
likewise if you consider adopting MCX essential but leave it licensed
separately under the Lesser GPL. If relicensing MCX under the GPL is
required, we'll need to talk further (and I'm sure RMS will become
involved), as I don't think MERANT plan to make Dimensions open source in
the near future...

Please send me the assignment sample anyway - I'm naturally curious!

Thanks for your time,
Pete.

> -----------------------------------------------------------------------------
>  Paul D. Smith <address@hidden>          Find some GNU make tips at:
>  http://www.gnu.org                      http://www.paulandlesley.org/gmake/
>  "Please remain calm...I may be mad, but I am a professional." --Mad Scientist


