benchmarking framework


From: Dirk Herrmann
Subject: benchmarking framework
Date: Sat, 20 Jul 2002 04:13:30 +0200 (CEST)

Hi folks,

I have taken the liberty of adding a benchmarking framework to guile.  It
is a simple adaptation of the testing framework, and a lot of code has
been copied from there.  Maybe at some point it would be better to
extract the common parts into a module.

Anyway, here's a short introduction:  benchmarks are placed in the
directory benchmark-suite/benchmarks.  They are ordinary scheme files,
but with the .bm extension.  Files with that extension are detected
automatically; other files can also be specified explicitly.

Within the benchmark files, you introduce benchmarks with the
(benchmark ...) macro.  The syntax is as follows:
  (benchmark NAME ITERATION body...)
where NAME is a name for the benchmark, similar to the names in the
test-suite (by the way, you also have with-benchmark-prefix; see the
sketch below), ITERATION is the number of times the body is to be
executed, and the body is the code to be benchmarked.  Example:
  (define bignum (1- (expt 2 128)))
  ;; both body forms below are executed and measured 130000 times
  (let ((i 0))
    (benchmark "bignum" 130000
      (logand i bignum)
      (set! i (+ i 1))))
This will run the expressions (logand i bignum) and (set! i (+ i 1))
130000 times (this can be scaled, however - see below).  The time for
this is measured and reported.  The execution counts in the small
examples are chosen such that on my machine each benchmark runs for
about one second.
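
Since with-benchmark-prefix is only mentioned in passing above: by
analogy with with-test-prefix from the test-suite, a complete .bm file
might look like this (a sketch; the file name and prefix are made up):

  ;; bignum.bm - a hypothetical benchmark file
  (define bignum (1- (expt 2 128)))

  ;; group related benchmarks under a common name prefix
  (with-benchmark-prefix "bignum"
    (let ((i 0))
      (benchmark "logand" 130000
        (logand i bignum)
        (set! i (+ i 1))))
    (let ((i 0))
      (benchmark "logior" 130000
        (logior i bignum)
        (set! i (+ i 1)))))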

You start the benchmarking with the command ./benchmark-guile.  You have
the same options as with ./check-guile, except for --flag-unresolved,
which is test specific, and --test-suite, which is renamed to
--benchmark-suite.  There is an additional option --iteration-factor
NUM, which allows you to scale the execution time of the benchmarks:  As
can be seen from the example above, every benchmark is given an
iteration count, which indicates how often the benchmark is to be
executed.  With the option --iteration-factor NUM you can increase or
decrease the execution count of the benchmarks and thus influence the
time needed for performing the benchmarks.  For example, running the
benchmark suite with --iteration-factor 0.5 will roughly halve the
execution time, since every benchmark's body is executed about half as
often.
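
Some example invocations (the file name is made up, and I am assuming
that benchmark files are passed explicitly the same way test files are
passed to ./check-guile):

  ./benchmark-guile                          # run all detected .bm files
  ./benchmark-guile --iteration-factor 0.5   # each body runs half as often
  ./benchmark-guile bignum.bm                # run only the given file

With --iteration-factor 0.5, the bignum benchmark above would execute
its body about 65000 instead of 130000 times.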

Results are written to a log file (this file contains a lot of data) and
to the console (still a lot of data, but not quite as much).  The values
have the following meaning:
  total:     total execution time (this is what the unix time command
             reports as real time).
  user:      user time (this is what the unix time command reports as
             user time).
  system:    system time, as with the unix time command.
  frame:     the part of the user time that is consumed by the
             benchmarking framework itself.  You can think of this as
             the time that would still be consumed even if the
             benchmarking code itself were empty.  This value does not
             include the time for garbage collection.
  benchmark: the part of the user time that is actually spent within
             the benchmarking code, i.e. with the time needed for the
             benchmarking framework subtracted.  This value, however,
             includes all garbage collection time.
  user/interp: the part of the user time that is spent in the
             interpreter (and not in garbage collection).
  bench/interp: the part of the benchmark time that is spent in the
             interpreter (and not in garbage collection).  This is most
             probably the value you are interested in, unless you are
             doing garbage collection checks.
  gc:        the time spent in garbage collection.
However, there are some caveats when using these values:  The frame time
is estimated by running an empty benchmark during startup and measuring
its time, which can be somewhat inaccurate.  This value is nevertheless
used to compute the benchmark and bench/interp times.  I don't know
about the accuracy of the other values as reported by (times) and
(gc-run-time).
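
Spelled out as equations, my reading of the descriptions above is (a
sketch derived from the definitions, not taken from the framework's
code):

  benchmark    = user - frame
  user/interp  = user - gc
  bench/interp = benchmark - gc
               = user - frame - gc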

Best regards, 
Dirk Herrmann



