automake-patches

Re: [PATCH] {GSoC} parallel-tests: simplify testsuite summary


From: Ralf Wildenhues
Subject: Re: [PATCH] {GSoC} parallel-tests: simplify testsuite summary
Date: Wed, 06 Jul 2011 08:24:38 +0200

Hi Stefano,

* Stefano Lattarini wrote on Sun, Jul 03, 2011 at 03:31:22PM CEST:
> Prefer a more deterministic, "tabular" format for the testsuite
> summary, always listing the numbers of passed, failed, xfailed,
> xpassed, skipped and errored tests, even when these numbers are
> zero.  This simplifies the logic of testsuite summary creation,
> makes it more easily machine-parseable, and will probably allow
> for easier addition of new kinds of test results in the future.
> 
> Applied to the temporary branch 'GSoC/experimental/test-results-work'.

This looks a bit verbose to me; also, IIUC, it introduces yet another
format that is different from our previous summary and from the
summaries of any other testsuite environment.  Any chance we can
lessen the NIH?

I've rejected a reformatting of the results before (see the list
archives for details).  I'm not totally opposed, if there is a real
advantage /for the user/, and if things are changing a lot anyway,
but we should definitely change into something that we think can be
stable for several years.  This is end-user API, a lot of stuff that's
not even using Automake (but only running a testsuite of a package that
uses Automake) may be relying on this.  IOW, things like GSRC, the
autobuild parser, or whatever else (that we don't have any control over)
that greps our output will now have to maintain yet another syntax
forever (since of course the old syntax won't die in the next several
years).
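To make that concrete, here is roughly the kind of ad-hoc scraping such
consumers end up doing — a made-up sketch against the new tabular
format (the field names come from the patch below; this is not actual
GSRC or autobuild code):

```shell
# Hypothetical downstream consumer scraping the proposed tabular
# summary for the failure count.
summary='# tests: 6
# pass:  1
# skip:  1
# xfail: 1
# fail:  1
# xpass: 1
# error: 1'
# "^# fail:" is anchored, so "# xfail:" cannot match by accident.
fails=$(printf '%s\n' "$summary" | sed -n 's/^# fail: *//p')
echo "failed tests: $fails"
```

Every such scraper has to be kept working for as long as both formats
are in the wild.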

> Subject: [PATCH] parallel-tests: simplify testsuite summary
> 
> Prefer a more deterministic, "tabular" format for the testsuite
> summary, always listing the numbers of passed, failed, xfailed,
> xpassed, skipped and errored tests, even when these numbers are
> zero.  This simplifies the logic of testsuite summary creation,
> makes it more easily machine-parseable, and will probably allow
> for easier addition of new kinds of test results.
> 
> * lib/am/check.am (am__tty_colors_dummy): New make variable, to
> reduce code duplication.  Extracted from previous versions of
> $(am__tty_colors), and extended by defining two new variables
> `$mgn' and `$brg'.
> [%?COLOR%, %!?COLOR%] (am__tty_colors): Use that new variable.
> (am__text_box): Delete, is not needed anymore.
> ($(TEST_SUITE_LOG)): Rewrite associated rules to implement the
> new testsuite summary format.
> * NEWS: Update.
> * tests/check10.test: Don't run with the parallel-tests harness
> too, that makes no sense anymore.
> * tests/color.test: Update and adjust.
> * tests/color2.test: Likewise.
> * tests/parallel-tests.test: Likewise.
> * tests/parallel-tests6.test: Likewise.
> * tests/parallel-tests9.test: Likewise.
> * tests/parallel-tests-unreadable-log.test: Likewise.
> * tests/parallel-tests-empty-testlogs.test: Likewise.
> * tests/parallel-tests-log-override-recheck.test: Likewise.
> * tests/parallel-tests-no-spurious-summary.test: Likewise.
> * tests/test-driver-custom-multitest.test: Likewise.
> * tests/test-driver-end-test-results.test: Likewise.
> * tests/parallel-tests-no-color-in-log.test: New test.
> * tests/testsuite-summary-color.test: Likewise.
> * tests/testsuite-summary-count.test: Likewise.
> * tests/testsuite-summary-count-many.test: Likewise.
> * tests/testsuite-summary-reference-log.test: Likewise.
> * tests/testsuite-summary-checks.sh: New auxiliary script, used
> by the new tests above.
> * tests/extract-testsuite-summary: Likewise.
> * tests/trivial-test-driver: Optimize for speed when there are
> lots of tests.
> * tests/Makefile.am (EXTRA_DIST): Distribute them.
> (testsuite-summary-color.log, testsuite-summary-count.log): Depend
> on them.
> (testsuite-summary-count-many.log): Depend on the auxiliary scripts
> 'trivial-test-driver' and 'extract-testsuite-summary'.
> (TESTS): Update.



> +## When writing the test summary to the console, we want to color a line
> +## reporting the count of a kind of result *only* if at least one test

s/a kind of/some/

> +## experienced such a result.  This function is handy in this regard.
> +     result_count () \
> +     { \
> +         if test x"$$1" = x"--color"; then \
> +           colorize=yes; \
> +         elif test x"$$1" = x"--no-color"; then \

What if output doesn't go to a terminal?  Does --no-color get set then,
or colorize?
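The guard I would expect in the caller is a tty check along these lines
(my sketch of the expected behavior, not code from the patch):

```shell
# Sketch: colorize only when stdout is actually a terminal.
if test -t 1; then
  color_flag=--color
else
  color_flag=--no-color
fi
echo "would run: create_testsuite_report $color_flag"
```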

> +           colorize=no; \
> +         else \
> +           echo "$@: invalid 'result_count' usage" >&2; exit 4; \
> +         fi; \
> +         shift; \
> +         desc=$$1 count=$$2; \
> +         if test $$colorize = yes && test $$count -gt 0; then \
> +           color_start=$$3 color_end=$$std; \
> +         else \
> +           color_start= color_end=; \
> +         fi; \
> +         echo "$${color_start}# $$desc $$count$${color_end}"; \
> +     }; \
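For reference, with make's `$$` un-escaped and a stub `$std`, the
function can be exercised standalone like this (my own sketch, colors
stubbed out):

```shell
# result_count from the patch, de-make-escaped, with a stub reset
# sequence so it runs outside the harness.
std=''    # would be the terminal "reset" escape sequence
result_count () {
  if test x"$1" = x"--color"; then
    colorize=yes
  elif test x"$1" = x"--no-color"; then
    colorize=no
  else
    echo "invalid 'result_count' usage" >&2; exit 4
  fi
  shift
  desc=$1 count=$2
  if test $colorize = yes && test $count -gt 0; then
    color_start=$3 color_end=$std
  else
    color_start= color_end=
  fi
  echo "${color_start}# $desc $count${color_end}"
}
line=$(result_count --no-color "fail: " 2)
echo "$line"
```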
> +## A shell function that creates the testsuite summary.  We need it
> +## because we have to create *two* summaries, one for test-suite.log,
> +## and a possibly-colorized one for console output.
> +     create_testsuite_report () \
> +     { \
> +       result_count $$1 "tests:" $$all   "$$brg"; \
> +       result_count $$1 "pass: " $$pass  "$$grn"; \
> +       result_count $$1 "skip: " $$skip  "$$blu"; \
> +       result_count $$1 "xfail:" $$xfail "$$lgn"; \
> +       result_count $$1 "fail: " $$fail  "$$red"; \
> +       result_count $$1 "xpass:" $$xpass "$$red"; \
> +       result_count $$1 "error:" $$error "$$mgn"; \

You don't want to color the actual numbers nor the 'fail:'?

Why the shift in capitalization?  I know this is totally bike shedding,
but see above for rationale.

>  ## Write "global" testsuite log.
>       {                                                               \
>         echo "$(PACKAGE_STRING): $(subdir)/$(TEST_SUITE_LOG)" |       \
>           $(am__rst_title);                                           \
> -       echo "$$msg";                                                 \
> +       create_testsuite_report --no-color;                           \
>         echo;                                                         \
>         echo ".. contents:: :depth: 2";                               \
>         echo;                                                         \
> @@ -243,22 +236,29 @@ $(TEST_SUITE_LOG): $(TEST_LOGS)
>         done;                                                         \
>       } >$(TEST_SUITE_LOG).tmp;                                       \
>       mv $(TEST_SUITE_LOG).tmp $(TEST_SUITE_LOG);                     \
> -     if test "$$failures" -ne 0; then                                \
> -       msg="$${msg}See $(subdir)/$(TEST_SUITE_LOG).  ";              \
> -       if test -n "$(PACKAGE_BUGREPORT)"; then                       \
> -         msg="$${msg}Please report to $(PACKAGE_BUGREPORT).  ";      \
> -       fi;                                                           \
> -     fi;                                                             \
> -     test x"$$VERBOSE" = x || $$exit || cat $(TEST_SUITE_LOG);       \
> -## Emit the test summary on the console, and exit.
> -     $(am__tty_colors);                                              \
> -     if $$exit; then                                                 \
> +## Emit the test summary on the console.
> +     if $$success; then                                              \
>         col="$$grn";                                                  \
>        else                                                           \
>         col="$$red";                                                  \
> +       test x"$$VERBOSE" = x || cat $(TEST_SUITE_LOG);               \
> +     fi;                                                             \
> +## Multi-line coloring is problematic with "less -R", so we really need
> +## to color each line individually.
> +     echo "$${col}$$br$${std}";                                      \
> +     echo "$${col}Testsuite summary for $(PACKAGE_STRING)$${std}";   \
> +     echo "$${col}$$br$${std}";                                      \
> +     create_testsuite_report --color;                                \
> +     echo "$$col$$br$$std";                                          \
> +     if $$success; then :; else                                      \
> +       echo "$${col}See $(subdir)/$(TEST_SUITE_LOG)$${std}";         \
> +       if test -n "$(PACKAGE_BUGREPORT)"; then                       \
> +         echo "$${col}Please report to $(PACKAGE_BUGREPORT)$${std}"; \
> +       fi;                                                           \
> +       echo "$$col$$br$$std";                                        \
>       fi;                                                             \
> -     echo "$$msg" | $(am__text_box) "col=$$col" "std=$$std";         \
> -     $$exit
> +## Be sure to exit with the proper exit status.
> +     $$success

Need to test this on Tru64 and OpenBSD shells.  We've been burned with
false negatives before.

> --- /dev/null
> +++ b/tests/extract-testsuite-summary
> @@ -0,0 +1,15 @@
> +#! /usr/bin/env perl

Is it necessary to add another perl script?  If yes, why use a different
portability mechanism than elsewhere in the code base?

> +# Extract the testsuite summary generated by the parallel-tests harness
> +# from the output of "make check".
> +
> +use warnings FATAL => 'all';
> +use strict;
> +
> +my $br = '=' x 76;
> +my @sections = ('');
> +while (<>)
> +  {
> +    push @sections, $_, '' if /$br/;
> +    $sections[-1] .= $_ if !/$br/;
> +  }
> +print @sections[1..$#sections-1];
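FWIW, the extraction logic boils down to "print everything from the
first separator bar through the last one"; my sketch of the same thing
in awk, with a shortened bar for readability (the real one is 76 `=`
characters):

```shell
# Reviewer's sketch of the perl extraction above, not part of the patch.
br='======'
summary=$(printf '%s\n' 'make chatter' "$br" 'Testsuite summary' '# tests: 1' "$br" 'more chatter' |
  awk -v br="$br" 'index($0, br) { inside = !inside; print; next }
                   inside { print }')
printf '%s\n' "$summary"
```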


> --- /dev/null
> +++ b/tests/parallel-tests-no-color-in-log.test
> @@ -0,0 +1,66 @@

> +# Check the testsuite summary with the parallel-tests harness.  This
> +# script is meant to be sourced by other test script, so that it can

s/meant to be //

> +# be used to check different scenarios (color and non-color tests
> +
> +# Colorized output from testsuite report shouldn't end up into log files.

s/from/& the/
s/into/in/

> +parallel_tests=yes
> +. ./defs || Exit 1
> +
> +esc=''
> +
> +# Check that grep can parse nonprinting characters.
> +# BSD 'grep' works from a pipe, but not a seekable file.
> +# GNU or BSD 'grep -a' works on files, but is not portable.

factor into defs?

> +case `echo "$esc" | $FGREP "$esc"` in
> +  "$esc") ;;
> +  *) echo "$me: fgrep can't parse nonprinting characters" >&2; Exit 77;;
> +esac
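If this does get factored into defs, the escape byte could also be
generated with printf instead of being embedded literally (a sketch;
mail archives tend to eat the raw byte, as the empty-looking assignment
above shows):

```shell
# Sketch: build ESC portably and probe whether grep can match it
# from a pipe.
esc=$(printf '\033')
if test "$(printf '%s\n' "$esc" | grep "$esc")" = "$esc"; then
  probe='grep handles ESC'
else
  probe='grep cannot handle ESC'
fi
echo "$probe"
```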
> +
> +TERM=ansi; export TERM
> +
> +cat >>configure.in <<END
> +AC_OUTPUT
> +END
> +
> +cat >Makefile.am <<'END'
> +LOG_COMPILER = $(SHELL)
> +AUTOMAKE_OPTIONS = color-tests parallel-tests
> +TESTS = pass fail skip xpass xfail error
> +XFAIL_TESTS = xpass xfail
> +END
> +
> +# FIXME: creative quoting to please maintainer-check.
> +echo exit '0' > pass
> +echo exit '0' > xpass
> +echo exit '1' > fail
> +echo exit '1' > xfail
> +echo exit '77' > skip
> +echo exit '99' > error
> +
> +$ACLOCAL
> +$AUTOCONF
> +$AUTOMAKE --add-missing
> +
> +./configure
> +mv config.log config-log # Avoid possible false positives below.
> +AM_COLOR_TESTS=always $MAKE -e check && Exit 1
> +$FGREP "$esc" *.log && Exit 1


> --- /dev/null
> +++ b/tests/testsuite-summary-checks.sh
> @@ -0,0 +1,106 @@

> +# Check the testsuite summary with the parallel-tests harness.  This
> +# script is meant to be sourced by other test script, so that it can
> +# be used to check different scenarios (colorized and non-colorized
> +# testsuite output, packages with and without bug-report addresses,
> +# test scripts in subdirectories, ...)

> +# Quite complexish, but allow the tests in client scripts to be written
> +# in a "data-driven fashion".

s/ish//

> +do_check ()
> +{
> +  cat > summary.exp
> +  expect_failure=false
> +  xfail_tests=''
> +  tests="TESTS='$*'"
> +  for t in $*; do
> +    case $t in fail*|xpass*|error*) expect_failure=:;; esac
> +    case $t in xfail*|xpass*) xfail_tests="$xfail_tests $t";; esac
> +  done
> +  test -z "$xfail_tests" || xfail_tests="XFAIL_TESTS='$xfail_tests'"
> +  st=0
> +  eval "env $tests $xfail_tests \$MAKE -e check > stdout || st=\$?"
> +  cat stdout
> +  if $expect_failure; then
> +    test $st -gt 0 || Exit 1
> +  else
> +    test $st -eq 0 || Exit 1
> +  fi
> +  $PERL -w "$testsrcdir"/extract-testsuite-summary stdout > summary.got \
> +   || fatal_ "cannot extract testsuite summary"
> +  cat summary.exp
> +  cat summary.got
> +  if test $use_colors = yes; then
> +    # Use cmp, not diff, because the files might contain binary data.
> +    compare=cmp
> +  else
> +    compare=diff
> +  fi
> +  $compare summary.exp summary.got || Exit 1
> +}
> +
> +br='============================================================================'
> +
> +$ACLOCAL
> +$AUTOCONF
> +$AUTOMAKE --add-missing

> --- /dev/null
> +++ b/tests/testsuite-summary-color.test

> +# Check colorization of the testsuite summary.

coloring?

> +. ./defs-static || Exit 1
> +
> +use_colors=yes
> +use_vpath=no
> +
> +. "$testsrcdir"/testsuite-summary-checks.sh || Exit 99
> +
> +./configure
> +
> +# ANSI colors.
> +red='[0;31m'
> +grn='[0;32m'
> +lgn='[1;32m'
> +blu='[1;34m'
> +mgn='[0;35m'
> +brg='[1m';
> +std='[m';
> +
> +success_header="\
> +${grn}${br}${std}
> +${grn}Testsuite summary for GNU AutoFoo 7.1${std}
> +${grn}${br}${std}"
> +
> +success_footer=${grn}${br}${std}
> +
> +failure_header="\
> +${red}${br}${std}
> +${red}Testsuite summary for GNU AutoFoo 7.1${std}
> +${red}${br}${std}"
> +
> +failure_footer="\
> +${red}${br}${std}
> +${red}See ./test-suite.log${std}
> +${red}Please report to address@hidden
> +${red}${br}${std}"
> +
> +do_check '' <<END
> +$success_header
> +# tests: 0
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check pass.t <<END
> +$success_header
> +${brg}# tests: 1${std}
> +${grn}# pass:  1${std}
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check skip.t <<END
> +$success_header
> +${brg}# tests: 1${std}
> +# pass:  0
> +${blu}# skip:  1${std}
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check xfail.t <<END
> +$success_header
> +${brg}# tests: 1${std}
> +# pass:  0
> +# skip:  0
> +${lgn}# xfail: 1${std}
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check fail.t <<END
> +$failure_header
> +${brg}# tests: 1${std}
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +${red}# fail:  1${std}
> +# xpass: 0
> +# error: 0
> +$failure_footer
> +END
> +
> +do_check xpass.t <<END
> +$failure_header
> +${brg}# tests: 1${std}
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +${red}# xpass: 1${std}
> +# error: 0
> +$failure_footer
> +END
> +
> +do_check error.t <<END
> +$failure_header
> +${brg}# tests: 1${std}
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +${mgn}# error: 1${std}
> +$failure_footer
> +END
> +
> +do_check pass.t xfail.t skip.t <<END
> +$success_header
> +${brg}# tests: 3${std}
> +${grn}# pass:  1${std}
> +${blu}# skip:  1${std}
> +${lgn}# xfail: 1${std}
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check pass.t fail.t skip.t <<END
> +$failure_header
> +${brg}# tests: 3${std}
> +${grn}# pass:  1${std}
> +${blu}# skip:  1${std}
> +# xfail: 0
> +${red}# fail:  1${std}
> +# xpass: 0
> +# error: 0
> +$failure_footer
> +END
> +
> +do_check pass.t xfail.t xpass.t <<END
> +$failure_header
> +${brg}# tests: 3${std}
> +${grn}# pass:  1${std}
> +# skip:  0
> +${lgn}# xfail: 1${std}
> +# fail:  0
> +${red}# xpass: 1${std}
> +# error: 0
> +$failure_footer
> +END
> +
> +do_check skip.t xfail.t error.t <<END
> +$failure_header
> +${brg}# tests: 3${std}
> +# pass:  0
> +${blu}# skip:  1${std}
> +${lgn}# xfail: 1${std}
> +# fail:  0
> +# xpass: 0
> +${mgn}# error: 1${std}
> +$failure_footer
> +END
> +
> +do_check pass.t skip.t xfail.t fail.t xpass.t error.t <<END
> +$failure_header
> +${brg}# tests: 6${std}
> +${grn}# pass:  1${std}
> +${blu}# skip:  1${std}
> +${lgn}# xfail: 1${std}
> +${red}# fail:  1${std}
> +${red}# xpass: 1${std}
> +${mgn}# error: 1${std}
> +$failure_footer
> +END

> --- /dev/null
> +++ b/tests/testsuite-summary-count-many.test

> +# Check test counts in the testsuite summary, with test drivers allowing
> +# multiple test results per test script, and for a huge number of tests.
> +# Incidentally, this test also checks that the testsuite summary doesn't
> +# give any bug-report address if it's not defined.
> +
> +parallel_tests=yes
> +. ./defs || Exit 1
> +
> +for s in trivial-test-driver extract-testsuite-summary; do
> +  cp "$testsrcdir/$s" . || fatal_ "failed to fetch auxiliary script $s"
> +done
> +
> +br='============================================================================'
> +
> +header="\
> +${br}
> +Testsuite summary for $me 1.0
> +${br}"
> +
> +footer="\
> +${br}
> +See ./test-suite.log
> +${br}"
> +
> +echo AC_OUTPUT >> configure.in
> +
> +cat > Makefile.am << 'END'
> +TEST_LOG_DRIVER = $(SHELL) $(srcdir)/trivial-test-driver
> +TESTS = all.test
> +# Without this, the test driver will be horrendously slow.
> +END
> +
> +cat > all.test <<'END'
> +#!/bin/sh
> +cat results.txt || { echo ERROR: weird; exit 99; }
> +END
> +chmod a+x all.test
> +
> +$PERL -w -e '
> +  use warnings FATAL => "all";
> +  use strict;
> +
> +  # FIXME: we would like this to be 1000 or even 10000, but the current
> +  # implementation is too slow to handle that :-(
> +  my $base = 5;
> +  my %count = (
> +    tests => $base * 1000,
> +    pass  => $base * 700,
> +    skip  => $base * 200,
> +    xfail => $base * 80,
> +    fail  => $base * 10,
> +    xpass => $base * 7,
> +    error => $base * 3,
> +  );
> +  my @results = qw/pass skip xfail fail xpass error/;
> +
> +  open (RES, ">results.txt") or die "opening results.txt: $!\n";
> +  open (CNT, ">count.txt") or die "opening count.txt: $!\n";
> +
> +  printf CNT "# %-6s %d\n", "tests:", $count{tests};
> +  for my $res (@results)
> +    {
> +      my $uc_res = uc $res;
> +      print STDERR "Generating list of $res ...\n";
> +      for (1..$count{$res})
> +        {
> +          print RES "$uc_res: $_\n";
> +        }
> +      printf CNT "# %-6s %d\n", $res . ":", $count{$res};
> +    }
> +'
> +
> +(echo "$header" && cat count.txt && echo "$footer") > summary.exp
> +
> +$ACLOCAL
> +$AUTOMAKE -a
> +$AUTOCONF
> +
> +./configure
> +
> +($MAKE check || : > make.fail) | tee stdout
> +test -f make.fail
> +
> +$PERL -w extract-testsuite-summary stdout > summary.got
> +cat summary.exp
> +cat summary.got
> +diff summary.exp summary.got || Exit 1

> --- /dev/null
> +++ b/tests/testsuite-summary-count.test
> @@ -0,0 +1,176 @@

> +# Check test counts in the testsuite summary.
> +
> +. ./defs-static || Exit 1
> +
> +use_colors=no
> +use_vpath=no
> +
> +. "$testsrcdir"/testsuite-summary-checks.sh || Exit 99
> +
> +seq_ ()
> +{
> +  case $# in
> +   2) l=$1 u=$2;;
> +   *) fatal_ "incorrect usage of 'seq_' function";;
> +  esac
> +  seq $1 $2 || {
> +    i=$l
> +    while test $i -le $u; do
> +      echo $i
> +      i=`expr $i + 1`
> +    done
> +  }
> +}
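The expr-based fallback, run on its own, behaves like seq (sketch with
hard-coded bounds):

```shell
# The while/expr loop from seq_ above, standalone.
l=3 u=6
i=$l
seq_out=
while test $i -le $u; do
  seq_out="$seq_out$i "
  i=`expr $i + 1`
done
echo "$seq_out"
```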
> +
> +./configure
> +
> +header="\
> +${br}
> +Testsuite summary for GNU AutoFoo 7.1
> +${br}"
> +
> +success_footer=${br}
> +
> +failure_footer="\
> +${br}
> +See ./test-suite.log
> +Please report to address@hidden

Please don't list bad but valid email addresses.  Either use
bug-autoconf or something in the example.com domain.  These addresses
otherwise get picked up from the mail archives by spammers.

> +${br}"
> +
> +# Corner cases.
> +
> +do_check '' <<END
> +$header
> +# tests: 0
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check pass.t <<END
> +$header
> +# tests: 1
> +# pass:  1
> +# skip:  0
> +# xfail: 0
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check fail.t <<END
> +$header
> +# tests: 1
> +# pass:  0
> +# skip:  0
> +# xfail: 0
> +# fail:  1
> +# xpass: 0
> +# error: 0
> +$failure_footer
> +END
> +
> +# Some simpler checks, with low or moderate number of tests.
> +
> +do_check skip.t skip2.t skip3.t xfail.t xfail2.t <<END
> +$header
> +# tests: 5
> +# pass:  0
> +# skip:  3
> +# xfail: 2
> +# fail:  0
> +# xpass: 0
> +# error: 0
> +$success_footer
> +END
> +
> +do_check pass.t pass2.t xfail.t xpass.t error.t error2.t <<END
> +$header
> +# tests: 6
> +# pass:  2
> +# skip:  0
> +# xfail: 1
> +# fail:  0
> +# xpass: 1
> +# error: 2
> +$failure_footer
> +END
> +
> +pass_count=22
> +skip_count=19
> +xfail_count=21
> +fail_count=18
> +xpass_count=23
> +error_count=17
> +tests_count=120
> +
> +pass=` seq_ 1 $pass_count  | sed 's/.*/pass-&.t/'`
> +skip=` seq_ 1 $skip_count  | sed 's/.*/skip-&.t/'`
> +xfail=`seq_ 1 $xfail_count | sed 's/.*/xfail-&.t/'`
> +fail=` seq_ 1 $fail_count  | sed 's/.*/fail-&.t/'`
> +xpass=`seq_ 1 $xpass_count | sed 's/.*/xpass-&.t/'`
> +error=`seq_ 1 $error_count | sed 's/.*/error-&.t/'`
> +
> +do_check $pass $skip $xfail $fail $xpass $error <<END
> +$header
> +# tests: $tests_count
> +# pass:  $pass_count
> +# skip:  $skip_count
> +# xfail: $xfail_count
> +# fail:  $fail_count
> +# xpass: $xpass_count
> +# error: $error_count
> +$failure_footer
> +END
> +
> +# Mild stress test with a lot of test scripts.
> +
> +tests_count=1888
> +pass_count=1403
> +skip_count=292
> +xfail_count=41
> +fail_count=126
> +xpass_count=17
> +error_count=9
> +
> +pass=` seq_ 1 $pass_count  | sed 's/.*/pass-&.t/'`
> +skip=` seq_ 1 $skip_count  | sed 's/.*/skip-&.t/'`
> +xfail=`seq_ 1 $xfail_count | sed 's/.*/xfail-&.t/'`
> +fail=` seq_ 1 $fail_count  | sed 's/.*/fail-&.t/'`
> +xpass=`seq_ 1 $xpass_count | sed 's/.*/xpass-&.t/'`
> +error=`seq_ 1 $error_count | sed 's/.*/error-&.t/'`
> +
> +do_check $pass $skip $xfail $fail $xpass $error <<END
> +$header
> +# tests: $tests_count
> +# pass:  $pass_count
> +# skip:  $skip_count
> +# xfail: $xfail_count
> +# fail:  $fail_count
> +# xpass: $xpass_count
> +# error: $error_count
> +$failure_footer
> +END

> --- /dev/null
> +++ b/tests/testsuite-summary-reference-log.test
> @@ -0,0 +1,88 @@

> +# Check that the global testsuite log file referenced in the testsuite
> +# summary and in the global testsuite log itself is correct.
> +
> +parallel_tests=yes
> +. ./defs || Exit 1
> +
> +mv configure.in configure.stub
> +
> +cat > fail << 'END'
> +#!/bin/sh
> +exit 1
> +END
> +chmod a+x fail
> +
> +cat configure.stub - > configure.in <<'END'
> +AC_OUTPUT
> +END
> +
> +cat > Makefile.am << 'END'
> +TEST_SUITE_LOG = my_test_suite.log
> +TESTS = fail
> +END
> +
> +$ACLOCAL
> +$AUTOCONF
> +$AUTOMAKE -a
> +
> +mkdir build
> +cd build
> +
> +../configure
> +
> +$MAKE check >stdout && { cat stdout; Exit 1; }
> +cat stdout
> +grep '^See \./my_test_suite\.log$' stdout
> +
> +mkdir bar
> +TEST_SUITE_LOG=bar/bar.log $MAKE -e check >stdout && { cat stdout; Exit 1; }
> +cat stdout
> +grep '^See \./bar/bar\.log$' stdout
> +
> +cd ..
> +
> +echo SUBDIRS = sub > Makefile.am
> +mkdir sub
> +echo TESTS = fail > sub/Makefile.am
> +mv fail sub
> +
> +cat configure.stub - > configure.in <<'END'
> +AC_CONFIG_FILES([sub/Makefile])
> +AC_OUTPUT
> +END
> +
> +$ACLOCAL --force
> +$AUTOCONF --force
> +$AUTOMAKE
> +
> +./configure
> +$MAKE check >stdout && { cat stdout; Exit 1; }
> +cat stdout
> +grep '^See sub/test-suite\.log$' stdout
> +cd sub
> +$MAKE check >stdout && { cat stdout; Exit 1; }
> +cat stdout
> +grep '^See sub/test-suite\.log$' stdout
> +cd ..
> +
> +TEST_SUITE_LOG=foo.log $MAKE -e check >stdout && { cat stdout; Exit 1; }
> +cat stdout
> +grep '^See sub/foo\.log$' stdout
> +
> +:
> diff --git a/tests/trivial-test-driver b/tests/trivial-test-driver
> index 113e158..4b43506 100644
> --- a/tests/trivial-test-driver
> +++ b/tests/trivial-test-driver
> @@ -54,30 +54,37 @@ done
>  
>  ## Run the test script, get test cases results, display them on console.
>  
> -tmp_out=$log_file-out.tmp
> -tmp_res=$log_file-res.tmp
> +tmp_output=$log_file-output.tmp
> +tmp_results=$log_file-results.tmp
> +tmp_status=$log_file-status.tmp
>  
> -"$@" 2>&1 | tee $tmp_out | (
> +"$@" 2>&1 | tee $tmp_output | (
>    i=0 st=0
> -  : > $tmp_res
> +  exec 5> $tmp_results
> +  : > $tmp_status
>    while read line; do
> +    result=
>      case $line in
> -      PASS:*|FAIL:*|XPASS:*|XFAIL:*|SKIP:*|ERROR:*)
> -        i=`expr $i + 1`
> -        result=`LC_ALL=C expr "$line" : '\([A-Z]*\):.*'`
> -        case $result in FAIL|XPASS|ERROR) st=1;; esac
> -        # Output testcase result to console.
> -        echo "$result: $test_name, testcase $i"
> -        # Register testcase outcome for the log file.
> -        echo ":test-result: $line" >> $tmp_res
> -        echo >> $tmp_res
> -        ;;
> +      PASS:*)  result=PASS  ;;
> +      FAIL:*)  result=FAIL  ;;
> +      XPASS:*) result=XPASS ;;
> +      XFAIL:*) result=XFAIL ;;
> +      SKIP:*)  result=SKIP  ;;
> +      ERROR:*) result=ERROR ;;
>      esac
> +    if test -n "$result"; then
> +      case $result in FAIL|XPASS|ERROR) st=1;; esac
> +      # Output testcase result to console.
> +      echo "$result: $test_name"
> +      # Register testcase outcome for the log file.
> +      echo ":test-result: $line" >&5
> +      echo >&5
> +    fi
>    done
> -  exit $st
> -)
> +  test $st -eq 0 || echo fail > $tmp_status
> +) | awk '{ print $0 ", testcase " NR }'
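That trailing awk just appends a running testcase number to each
console line, e.g. (my sketch, with made-up input):

```shell
# What the new awk post-processing does to per-testcase output.
numbered=$(printf '%s\n' 'PASS: foo.test' 'FAIL: foo.test' |
  awk '{ print $0 ", testcase " NR }')
printf '%s\n' "$numbered"
```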
>  
> -if test $? -eq 0; then
> +if test ! -s $tmp_status; then
>    global_result=PASS
>  else
>    global_result=FAIL
> @@ -89,13 +96,13 @@ fi
>    echo "$global_result: $test_name"
>    echo "$global_result: $test_name" | sed 's/./=/g'
>    echo
> -  cat $tmp_res
> +  cat $tmp_results
>    echo
>    echo --------------------
>    echo
> -  cat $tmp_out
> +  cat $tmp_output
>  } > $log_file
> -rm -f $tmp_out $tmp_res
> +rm -f $tmp_output $tmp_results $tmp_status
>  
>  ## And we're done.

Thanks,
Ralf


