Re: Extended test suites

From: Mike Zraly
Subject: Re: Extended test suites
Date: Fri, 28 Aug 2009 11:17:48 -0400

What I find useful is to have a test driver that manages all the tests,
including tracking which tests can be run on a given platform and which
tests are allowed to fail on that platform.  The driver can be written in
C or perl or whatever you expect each build environment to have on hand.
I find perl easiest to use for this, but C may be more portable.  You can
pass configuration data to the driver to tell it what platform is being
built and what options have been specified at configure time.
For one of my own projects, I wrote a driver in perl that tries to
compile, link, and execute test programs written in C++.  The driver
looked at the name of each file to decide whether the program should fail
to compile, fail to link, fail to run (exit with a return code other than
0), or succeed all the way.  The TESTS_ENVIRONMENT macro was set to the
name of the driver program, plus some options to pass certain values,
e.g. MAKE, OBJEXT, EXEEXT.  The TESTS macro was set to a list of all the
test programs.
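In Makefile.am terms, that setup might look something like the following sketch.  The driver name run-tests.pl, the option names, and the test names are invented for illustration; only TESTS, TESTS_ENVIRONMENT, MAKE, OBJEXT, and EXEEXT come from the description above.

```make
## Hypothetical Makefile.am fragment -- run-tests.pl and its
## --make/--objext/--exeext options are illustrative names.
TESTS_ENVIRONMENT = $(PERL) $(srcdir)/run-tests.pl \
        --make="$(MAKE)" --objext="$(OBJEXT)" --exeext="$(EXEEXT)"
TESTS = pass_basic fail_compile fail_link fail_run
```

With the classic serial test harness, automake runs each entry of TESTS with TESTS_ENVIRONMENT prefixed to the command line, so the driver sees the test name as its final argument and can decide from it what outcome to expect.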

The pain with this approach was that I had to add each test program to
the EXTRA_PROGRAMS macro so that automake would generate rules in the
Makefile to build the program.  The pain was worthwhile, though, because
it allowed the driver to compile the programs by issuing 'make' commands.
Otherwise I would have had to pass the compile and link commands to the
driver via TESTS_ENVIRONMENT and deal with complicated quoting issues.  I
tried that first, and gave up.
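Concretely, that part of the Makefile.am might read as follows (the test names are hypothetical):

```make
## Listing the tests under EXTRA_PROGRAMS makes automake emit build
## rules without building anything by default; the driver then runs
## e.g. "$(MAKE) fail_link$(EXEEXT)" itself and checks the exit status.
EXTRA_PROGRAMS = pass_basic fail_compile fail_link fail_run
pass_basic_SOURCES   = pass_basic.cpp
fail_compile_SOURCES = fail_compile.cpp
fail_link_SOURCES    = fail_link.cpp
fail_run_SOURCES     = fail_run.cpp
```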

Another alternative would be to modify your Makefile.am to generate the
values of the TESTS and XFAIL macros at runtime, based on platform and
options.  XFAIL would be set to the platform/option-specific set of tests
allowed to fail, and TESTS would be set to the global list MINUS the set
in XFAIL.  (Hmm, if a test is listed in both XFAIL and TESTS, does it get
run once with an expectation of failure?)  I don't think generating macro
values at runtime is recommended, though.
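The usual static alternative is to pick the expected-failure set with an automake conditional defined at configure time instead of computing it at make time.  A minimal sketch, where the ON_DARWIN condition and test names are invented examples:

```make
## In configure.ac:
##   AM_CONDITIONAL([ON_DARWIN],
##       [case $host_os in darwin*) true;; *) false;; esac])
## Then in Makefile.am:
TESTS = test_core test_fragile
if ON_DARWIN
XFAIL_TESTS = test_fragile
endif
```

Note the actual automake variable is spelled XFAIL_TESTS; tests listed there still run under "make check" but are reported as expected failures rather than errors.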

- Mike

On Thu, Aug 27, 2009 at 12:33 PM, Martin Quinson <address@hidden> wrote:

> Hello,
> I have a project based on automake which comes with about 120 tests. The
> bad thing is that some of the tests sometimes fail, but not always. I
> would like to be able to express the fact that some tests are absolutely
> mandatory to release the product while others are not. I could then say
> that my product is entirely stable on linux/x86, but that some less
> common features are not stable on Mac OS X, for example. The remaining
> bugs on the less common features would not be release-blockers.
> I already know about the XFAIL variable and the possibility to mark that
> some tests are known to fail (and unfortunately, we also have to use
> this). But I'm here speaking of something like a TESTS_EXTRA variable and
> a check-extra: make target testing the base tests plus the extra ones.
> I tried to do it with something like the following chunk:
> TESTS ?= regular mandatory tests
> TESTS_EXTRA = other, more fragile tests
> check-extra: $(TESTS_EXTRA)
>        for d in $(SUBDIRS) ; do $(MAKE) -C $$d check-extra ; done
>        TESTS="$(TESTS) $(TESTS_EXTRA)" $(MAKE) check
> The benefit of this idea would be to reuse automatically the good points
> of the regular check target on my extended set of tests. But it does not
> work because automake seems not to detect that I set the TESTS variable
> if I use the ?= construct instead of =. The check target is then
> absent from the generated Makefile. But I need this construct to be
> able to override the list of tests.
> Another approach would be to duplicate the check target in my code to
> deal with the content of $(TESTS_EXTRA), but it seems rather inelegant
> to me.
> So, here are my questions:
>  * Do you guys think that including a notion of TESTS_EXTRA would be
> interesting for other automake users (I'm not quite sure of this myself)
>  * Could you please change the test detection mechanism so that I could
> override the test list as I wanted to do?
>  * Have other people faced this issue and used a more elegant solution
> than this TESTS_EXTRA story?
> Thanks for your time, and for this incredibly useful tool.
> Mt.
> --
> A language that doesn't affect the way you think about programming, is
> not worth knowing. -- "Epigrams in Programming", by Alan J. Perlis
