cons-discuss

RE: Modularizing cons


From: Greg Spencer
Subject: RE: Modularizing cons
Date: Fri, 31 Aug 2001 13:49:07 -0600

Thanks for the feedback!

I too was conflicted about which approach to take, as you can see from
the e-mails.  I went with the OOD approach because that is what I know
best, and it's what usually feels "cleanest" to me.

I have the same reservations about the level of detail necessary when
you want to do something special, like reorder the command-line args,
add functionality to a class, or add a new tool.  The upside is that the
changes are usually very self-contained and won't require knowledge of
the entirety of cons (which is generally required now).  Also, I think
that for the most part people just want something that does the "usual"
things well, and it's very hard to make a tool that stays just as simple
when someone wants to do the unusual -- we can only hope that the work
expended on the unusual is reusable by others.

For someone who wants to compile code for both Win32 and PocketPC, for
instance (like me :-), it will require a bit of duplication, but
hopefully not much actual work, and once one person does it, it should
be modular enough to plug it into anyone else's build environment.

I can envision a base distribution of cons with tested components for
win32 and Linux, and a contrib section of components and specializations
for other, more esoteric, situations.  A modular approach allows the
contrib stuff to build upon the core, without having to "apply a patch"
to get it to work (well, in most cases).

One feature of the OOD method that I REALLY like is that if you want to
do something new, you are free to specialize a class instead of
inventing the whole thing all over again.  For instance, you could make
it so that a new set of eMbedded C++ compiler objects are actually just
derived the Win32 equivalents (using @ISA), with a couple of overridden
functions. That way, if someone enhances the Win32 objects, the eMbedded
objects get the enhancements for free (probably).
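Something like this rough sketch of what I mean (the package names,
methods, and flags here are all invented for illustration -- they aren't
the actual modularized-cons classes):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical Win32 compiler class; method names are illustrative only.
package Compiler::Win32;
sub new     { my $class = shift; return bless { @_ }, $class; }
sub binary  { return 'cl.exe'; }
sub flags   { return '/nologo /W3'; }
sub command { my $self = shift; return join ' ', $self->binary, $self->flags; }

# The eMbedded C++ class inherits everything via @ISA and overrides
# only what differs -- here, just the compiler binary.
package Compiler::EmbeddedCxx;
our @ISA = ('Compiler::Win32');
sub binary { return 'clarm.exe'; }

package main;
my $cc = Compiler::EmbeddedCxx->new;
# command() is inherited from Compiler::Win32, so if someone enhances it
# there, the eMbedded class picks up the change for free.
print $cc->command, "\n";   # clarm.exe /nologo /W3
```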

Also, if I want to generate some config value on the fly (maybe a moving
target, like a license key or a version number) instead of giving it a
static value, it's easier to do that inside a member function than to
pass a perlref as an argument and make sure everything handles it
correctly: the member function has a place where it can "live", and we
don't need lots of special-case code everywhere to make sure that the
functions all take arrayrefs, perlrefs, scalars, etcetera, as arguments.
It also encourages the placement of specialized code into a separable
module, which I think is a good idea.
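For example, something along these lines (again, the class and method
names are made up for the sake of the sketch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical config class holding a static version string.
package Config::Static;
sub new     { my ($class, $v) = @_; return bless { version => $v }, $class; }
sub version { my $self = shift; return $self->{version}; }

# A subclass overrides version() to compute the value on the fly.
# Callers can't tell the difference, so nothing downstream needs
# special-case code for scalars vs. perlrefs.
package Config::Generated;
our @ISA = ('Config::Static');
sub version {
    my (undef, undef, undef, $mday, $mon, $year) = localtime;
    return sprintf '1.0.%04d%02d%02d', $year + 1900, $mon + 1, $mday;
}

package main;
# The same call site works for a static value and a computed one.
for my $cfg (Config::Static->new('1.0.0'), Config::Generated->new('unused')) {
    print $cfg->version, "\n";
}
```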

                                -Greg.

-----Original Message-----
From: Gary Oberbrunner [mailto:address@hidden]
Sent: Friday, August 31, 2001 12:33 PM
To: Greg Spencer; 'cons mailing list'
Subject: RE: Modularizing cons


Nice work so far, Greg!  I took a quick look at it, and it looks like it
can really succeed in getting the tool-specific stuff out of the base
cons.  The tool-mapping approach is DEFINITELY needed!

I'm conflicted between your OOD approach and the simpler string-variable
one.  Your OOD approach is essentially building a set of methods to
access some of the command-line params of a compiler, while leaving
others (defines, warnings, etc.) out.  What if someone wants to reorder
their command-line params, or add some in a place we didn't expect?
They have to modify Builder::Compiler::VCXX::GetCommand.  That doesn't
seem like a good idea.  I think the string method is going to be
simpler.  But on the other hand, your OOD approach doesn't have the
gross _IFLAGS hack; it stores them in a nice internal format and
computes them whenever it wants.

Partly it's a question of how much structure cons imposes vs. how simple
it is for the user to reorganize things.  If someone uses cons with an
embedded-system compiler, how much work does she have to do to get it
going?  Seems like fooling with a few construction string vars is going
to be simpler than cutting and pasting from an existing compiler object.
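To sketch what I mean by the string-variable style (the variable names
and the expansion helper here are invented for illustration, not actual
cons internals): the whole command line is one template string, so
reordering or inserting params is a plain string edit, no subclassing
required.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical construction variables; the user can reorder or extend
# CCCOM freely without touching any class.
my %env = (
    CC     => 'cl.exe',
    CFLAGS => '/nologo /W3',
    CCCOM  => '%CC %CFLAGS /c %SOURCE /Fo%TARGET',
);

# Recursively expand %VAR references from the environment plus any
# per-invocation extras (like SOURCE and TARGET).
sub expand {
    my ($env, $str, %extra) = @_;
    my %vars = (%$env, %extra);
    $str =~ s/%(\w+)/exists $vars{$1} ? expand(\%vars, $vars{$1}) : "%$1"/ge;
    return $str;
}

print expand(\%env, $env{CCCOM}, SOURCE => 'foo.c', TARGET => 'foo.obj'), "\n";
# cl.exe /nologo /W3 /c foo.c /Fofoo.obj
```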

Your "Process" approach lets you create a builder for any {src,target}
extension pair, but I'm not sure it gives enough flexibility.  If I want
to link some of my .objs into exes, but others into a shared lib, I have
to make a separate env to do that if everything goes through Process --
seems like it's better to have the user specify the destination type
explicitly, like Program and Library (and SharedLibrary) work now --
they can all take objs.  You could keep the Process model by just having
its first arg be the destination type -- it could be inferred if
omitted.  So if I say Process('.i', @srcs), it'll preprocess them with
cpp, if I've set up the .c->.i and .cxx->.i mappings.  But it might be
simpler to encode the desired target type into the name, I'm not sure.
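Roughly what I have in mind (the mapping-table layout and the commands
in it are invented for the sake of the sketch):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical {src,target} extension mappings; each entry builds the
# command line for one source/target pair.
my %builders = (
    '.c->.i'    => sub { "cpp $_[0] > $_[1]" },
    '.cxx->.i'  => sub { "cpp $_[0] > $_[1]" },
    '.c->.obj'  => sub { "cl /c $_[0] /Fo$_[1]" },
);

# Process() with the destination extension as its first arg: look up the
# builder for each source's extension and derive the target name.
sub Process {
    my ($dst_ext, @srcs) = @_;
    my @cmds;
    for my $src (@srcs) {
        my ($src_ext) = $src =~ /(\.\w+)$/;
        my $build = $builders{"$src_ext->$dst_ext"}
            or die "no mapping for $src_ext -> $dst_ext\n";
        (my $target = $src) =~ s/\Q$src_ext\E$/$dst_ext/;
        push @cmds, $build->($src, $target);
    }
    return @cmds;
}

print "$_\n" for Process('.i', 'foo.c', 'bar.cxx');
# cpp foo.c > foo.i
# cpp bar.cxx > bar.i
```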

Does 'Command' still work the same way in your new scheme?  It has no
API, right, just a command line and sources, as usual?

OK, I have to get back to work now, even though this is more fun! :-)

-- Gary Oberbrunner



