From: Noah Misch
Subject: Re: Autotest: loops of tests
Date: Tue, 16 Aug 2005 12:23:49 -0700
User-agent: Mutt/1.5.6i

On Tue, Aug 16, 2005 at 08:05:36AM +0200, Ralf Wildenhues wrote:
> * Noah Misch wrote on Tue, Aug 16, 2005 at 12:10:32AM CEST:
> > On Mon, Aug 15, 2005 at 06:26:29PM +0200, Ralf Wildenhues wrote:
> > > Currently, AT_CHECK(cmd, ignore, , , fail_commands)
> > > is somewhat superfluous, as fail_commands will never get executed
> > > because we ignore the return value.  The patch below changes that.
> > > I regard this change in semantics as ok because of the nonsensical
> > > behavior this combination had before, IMVHO.
> > 
> > AFAICS, fail_commands would run if cmd has a non-empty stdout or stderr.
> 
> You mean, if $3 or $4 of AT_CHECK contain the expected output, right?

Yes; in your example, the expected output is empty.  If you need an AT_CHECK
variant whose failure does not stop the test case, that should be a different
macro.  Let's not overload `AT_CHECK(cmd,ignore,ignore,ignore,...)'.

Perhaps we want a macro, say `AT_CLEANUP_ON_FAILURE(0|1)', that makes failing
`AT_CHECK's terminal or nonterminal for the remainder of the current test case.
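
To make that concrete, a test case could then read as follows.  This is only a
sketch of the proposed semantics; `AT_CLEANUP_ON_FAILURE' does not exist, and
`check-variation' stands in for whatever command is under test:

  AT_SETUP([many variations])
  AT_CLEANUP_ON_FAILURE([0])  dnl proposed: failing AT_CHECKs no longer end the test
  for i in 1 2 3; do
    AT_CHECK([check-variation $i])
  done
  AT_CLEANUP_ON_FAILURE([1])  dnl proposed: restore the usual terminal behavior
  AT_CLEANUP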

> > AT_CHECK([test -z "$i" || echo "$i" >failures], ignore, ignore, ignore)
> 
> That is not allowed:
> 
> |  -- Macro: AT_CHECK (COMMANDS, [STATUS = ``0''], [STDOUT = ``''],
> |           [STDERR = ``''], [RUN-IF-FAIL], [RUN-IF-PASS])
> *snip*
> | 
> |      The COMMANDS _must not_ redirect the standard output, nor the
> |      standard error.

The manual imposes this restriction on account of a bug in the Ultrix shell, which
is all but extinct.  Even so, you could avoid the problem with this:

  for ...; do
    AT_CHECK([test -z "$i" || touch failures], ignore, ignore, ignore)
  done
  AT_CHECK([test -f failures], 1)
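
Fleshed out into a whole test group, with a made-up command and input list just
for illustration, that would be:

  AT_SETUP([frobnicate many inputs])
  for i in a b c; do
    AT_CHECK([frobnicate $i || touch failures], ignore, ignore, ignore)
  done
  AT_CHECK([test -f failures], 1)
  AT_CLEANUP

The `touch' stands in for the forbidden redirection: the stamp file records
that some iteration failed, and the final `AT_CHECK' turns that into a single
test failure at the end.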

> Otherwise, I'd guess this would be ok.  But I also like the feature of
> allowing files to be captured the way I proposed.  Obviates the need to
> write even more of my own test suite inside autotest.  :)

Rather, we should log the captured files regardless of the test outcome, just
like we already log stdout and stderr.

> > This does look like a sound technique for testing many variations of a 
> > command.
> 
> Exactly.  So, does it have a chance of ending up in Autotest?

I would welcome an addition to the manual describing this approach.  If you have
a wrapper macro in mind, we could look at that, too.
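
For instance, something along the lines of your loop might do; `MY_CHECK_EACH'
is just a name I made up here, packaging the stamp-file idiom from above:

  dnl MY_CHECK_EACH(WORD-LIST, COMMAND)
  dnl Run COMMAND on each word of WORD-LIST.  A failing iteration only
  dnl touches a stamp file; the test case fails once, at the end, if any
  dnl iteration failed.
  m4_define([MY_CHECK_EACH],
  [for my_word in $1; do
     AT_CHECK([$2 $my_word || touch failures], ignore, ignore, ignore)
   done
   AT_CHECK([test -f failures], 1)])

A test group would then say, e.g., `MY_CHECK_EACH([a b c], [frobnicate])'.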



