cons-discuss

Re: Remark 2: Desired feature: Conscript-uptodate targets


From: Steven Knight
Subject: Re: Remark 2: Desired feature: Conscript-uptodate targets
Date: Thu, 30 Nov 2000 16:41:22 -0600 (CST)

On Thu, 30 Nov 2000, Zachary Deretsky wrote:
> > For each regular Program target (or maybe Install target, depending on
> > where you want to run the tests from), use AfterBuild to AddTarget the
> > test's log file.  Then either statically (in the Conscript) or
> > dynamically (in the AfterBuild code) add a RunRegressionTest method
> > for each test to be run, producing the log file you just AddTarget'ed.
> > 
> > AddTarget'ed targets don't get looked at until all the regular
> > (statically defined) targets are built, so your regression tests will
> > all be run after the commands are built.  Of course this is really a
> > semi-bogus dependency, since you could in theory run regression test 1
> > as soon as program 1 is built.
> 
> The projects that I currently have do not consist of one Program target,
> they contain libraries, executables, message files, gif files, etc.
> So, I really need everything in the project to be built and installed,
> before I test it.

In other words, *every* test to be run depends on *every* product being
built first?  If your system is at all modular (as it sounds like yours
is), this is rarely the case.  Different tests are probably testing
different parts of the overall system.  If so, then the results of
each test depend on the test script itself AND the parts of the system
the script is testing.

In other words, if you update one library (for example), you only want
to re-run the tests that depend on that library, right?  And if a given
test only depends on one library, there should be no harm in running
it before some other program exists.  If there is a problem, then it
implies that the test results depend on the library AND the program,
and both should be dependencies of the test results.
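
(In Cons terms, a hypothetical sketch using the Command/Depends idiom shown further below; the test script and the run_test driver are made-up names:)

        # Hypothetical: ab_test.out exercises both a.lib and b.exe, so both
        # are declared; touching either one re-runs just this test.
        Command $env 'ab_test.out', 'test_ab.pl', "run_test %< > %>";
        Depends $env 'ab_test.out', qw( a.lib b.exe );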

> I can achieve the result I am looking for by adding a special target,
> depending on all other local targets, myself into every Conscript file.

But if the targets and tests are local to every Conscript file, then
non-local targets need not be built before the local tests are run.
You just need the local targets.  In other words, you can build and
test subsystem by subsystem, not by separate, global passes of "build
everything" and "test everything" (the Make model).

> This will add complexity and raise maintenance costs, so I would
> prefer cons providing this feature.
> 
> It is a philosophical issue: cons makes everything 'flat' by removing
> the sibling walls for the efficiency of compilation.
> 
> But humans do not think about a large project as 'flat', they
> partition it into more-or-less self-contained sub-projects.

But the Cons flattening of the dependency analysis doesn't imply anything
about how you partition the project into sub-projects.  Looked at another
way:  you say that every target must be built before any test can be run.
That's a flat dependency hierarchy, too.

If you make the tests depend only on the components they're testing,
you end up with a flexible, hierarchical testing infrastructure,
regardless of how Cons or Make flattens its dependency analysis.

> Skyscrapers are not built from bricks...
> 
> The tool that adapts to the human way of thinking is easier to use,
> methinks, and "Conscript-uptodate" targets would provide convenience
> without loss of efficiency.
> 
> I wonder if anybody else thinks the same?

Sure, most people who make the transition from Make to Cons start out
thinking this way.  I think you're caught in this trap.

The point is, tests (or, more specifically, their results) *do* have
dependencies on the specific targets that they're designed to test.
The way people normally use Make (separate "build" and "test" passes)
lets us ignore these dependencies because the tests don't happen until
after all the builds are complete.

Ignoring these dependencies (the Make model) is seductive, because you
don't have to pay attention to how your tests correspond to the targets.
But you end up with a far more flexible build+test infrastructure if
you actually set up these dependencies and let Cons do the right work.
I've done this for almost all of my projects by capturing the test
output into one or more log files.  (The Conscript files used to build,
test, and package Cons itself are written this way, for example, although
that was mostly Rajesh's work.)

The easiest way is to use a single test output file, something like:

        # Build the products...
        Program $env 'foo.exe', @foo_sources;
        Library $env 'bar.lib', @bar_sources;
        Command $env 'xxx.gif', 'gif.input', "generate_gif %< > %>";

        # ...run every test, capturing the output in a single log file...
        Command $env 'test.out', @tests, qq(
                run_tests %< > %>
        );
        # ...and make that log file depend on every product.
        Depends $env 'test.out', qw( foo.exe bar.lib xxx.gif );

But you can obviously use the same technique to get as fine-grained as
you like by creating appropriate dependency mappings (and output files)
for individual tests.  If you do this, builds end up being more effective
because you can change a source file OR a test and have just the right
tests re-executed, all without separate passes through the hierarchy.
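
For instance (same caveat: the per-test script names and the run_test driver are illustrative, not real tools):

        # One log file per test, each depending only on what it exercises.
        Command $env 'foo_test.out', 'test_foo.pl', "run_test %< > %>";
        Depends $env 'foo_test.out', 'foo.exe';

        Command $env 'gif_test.out', 'test_gif.pl', "run_test %< > %>";
        Depends $env 'gif_test.out', 'xxx.gif';

Change a foo source and only foo_test.out is rebuilt; change test_gif.pl
and only gif_test.out is.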

(Hey, how'd I end up here on this soapbox...?)

        --SK