
Re: Automated regression testing


From: Ralf Wildenhues
Subject: Re: Automated regression testing
Date: Sat, 9 Oct 2010 10:37:23 +0200
User-agent: Mutt/1.5.20 (2010-08-04)

Hi Jef,

* Jef Driesen wrote on Fri, Oct 08, 2010 at 09:37:02PM CEST:
> 1. How do I start and stop my device simulator?
> 
> In order to run the test application, the device simulator needs to
> be running, and afterwards it needs to be shut down again. How can I
> do this? The device simulator is not a very sophisticated
> application: it basically runs an infinite loop, and needs to be
> aborted with Ctrl-C. So it's not daemonized in any way. How can I do
> this from the makefiles?

Automake currently provides no before/after hooks for the check targets.
You could override 'check' in a toplevel Makefile.am that does little
other than recursing (and then call check-recursive yourself), or just
use a different target name:

test:
        start simulator & pid=$$!; \
        if $(MAKE) check; then st=0; else st=$$?; fi; \
        kill $$pid; \
        exit $$st

This rule is not 'make -n' safe.  Capturing $! right after the
background command avoids relying on job control ('kill %1'), which is
disabled in the non-interactive shells that run make recipes.  The if
construct avoids exiting prematurely even with make implementations
that use 'sh -ec' to execute rules (as required by the newest POSIX).
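The other alternative mentioned above, overriding 'check' in a toplevel
Makefile.am, might look like the following sketch ('start simulator' is
a placeholder for however you launch your simulator):

    # Toplevel Makefile.am (sketch): wrap the recursive check run
    # with simulator startup and shutdown.
    check:
            start simulator & pid=$$!; \
            if $(MAKE) check-recursive; then st=0; else st=$$?; fi; \
            kill $$pid; \
            exit $$st

Note that Automake will warn about overriding one of its own targets,
so keep this rule in the toplevel Makefile.am only.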

> 2. How do I run multiple tests with only a single application?
> 
> My test application can test several scenarios, but only one at a
> time. To test each possible scenario, the test application has to be
> run multiple times, but with different options. Or with the same
> options, but with a different data file loaded in the simulator.

The easiest approach is to have one script that runs them all, but then
you get only a single 'PASS: script' result line afterwards.  For more
granular results, you can have one two-line script per set of options
that calls your application with the appropriate options.
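Such a per-scenario wrapper can be as small as this (a sketch; 'testapp'
and '--scenario' are placeholder names, and a stub function stands in
for the real binary here so the example is self-contained):

```shell
#!/bin/sh
# test-scenario1 -- one wrapper per scenario, so 'make check' reports
# a separate PASS/FAIL line for each scenario.
# Stub standing in for the real test application; replace with e.g.
# "$srcdir"/../testapp in a real test suite.
testapp() { echo "scenario $2 ok"; }
testapp --scenario 1
```

You then list the wrappers in Makefile.am as usual:

    TESTS = test-scenario1 test-scenario2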

> 3. How do I detect success/failure based on the output of a test
> application?
> 
> I mainly want to run regression testing on the parsing of the data.
> The test application writes the parsed data to some file, and then I
> want to compare the output with some ground truth data. If they are
> not identical the test should fail. So it's not just the exit code
> that I want to capture.

The script or scripts could cmp or diff (depending on whether you need
byte-for-byte identical output or want to ignore newline encoding) the
output of the programs they start against the expected output.  Common
parts/functions of multiple scripts can be put in a scriptlet that is
sourced by all of them.
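The compare step can be sketched like this (printf stands in for the
real test application, and the file names and contents are placeholders,
so the example is self-contained):

```shell
#!/bin/sh
# Compare the parser's output against a stored expected ("golden") file.
# The expected file is created inline here; in a real test it would be
# a data file shipped in the distribution (found via $srcdir).
printf 'depth=10\ntemp=21\n' > expected.out
printf 'depth=10\ntemp=21\n' > actual.out   # the real app would write this
diff -u expected.out actual.out             # exits non-zero on any difference
```

The script's exit status is the diff status, which is exactly what the
Automake test harness looks at to decide PASS or FAIL.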

The test suite for Automake itself does a few of these things.

Arguably, if the above strategy requires a lot of repetition, then the
Autotest framework (from Autoconf) might be a better choice for you:
there you can use m4 to do the repetition for you.

Cheers,
Ralf


