From: Eric Anderson
Subject: [Monotone-devel] Patch to add Composite Benchmarks and some simple scalable add/commit tests
Date: Mon, 17 Jul 2006 23:55:46 -0700
Attached is a patch to add composite benchmarks and some simple
scalable add/commit tests to net.venge.monotone.contrib.benchmark. It
will apply to e2a6abe5da29231dd202b0fd8dc9cb791a8ff969 + the
server.alive patch I sent yesterday.
The idea behind the composite benchmark is that one can say:
benchmark(setup-empty-workdir, setup-create-random-files, measure-add,
measure-commit, measure-pull)
and the benchmark suite will first set up an empty working directory,
create a bunch of random files in it, and then measure (using the
instrumenter) the time to add, commit, and pull the resulting
repository. This allows one to construct a series of operations and
test them all in sequence. The idea, although not yet implemented, is
that one could change measure- to setup-, so that the work would happen
during setup rather than during measurement, and an expensive
instrumenter would only be run over the relevant parts.
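A minimal sketch of the chaining idea, in Python, might look like the
following. The Stage and CompositeBenchmark names and the timing logic
here are illustrative only, not the patch's actual API: setup stages
execute uninstrumented, and only measure stages are timed.

```python
import time

class Stage:
    """One step in a composite benchmark: a name, a callable, and a
    flag saying whether it is setup (untimed) or a measurement."""
    def __init__(self, name, func, is_setup=False):
        self.name = name
        self.func = func
        self.is_setup = is_setup

class CompositeBenchmark:
    """Run a sequence of stages in order; setup stages run plain,
    measure stages have their wall-clock time recorded."""
    def __init__(self, *stages):
        self.stages = stages

    def run(self):
        timings = {}
        for stage in self.stages:
            if stage.is_setup:
                stage.func()                      # not measured
            else:
                start = time.time()
                stage.func()
                timings[stage.name] = time.time() - start
        return timings

# Two setup stages followed by one measured stage.
bench = CompositeBenchmark(
    Stage("setup-empty-workdir", lambda: None, is_setup=True),
    Stage("setup-create-files", lambda: None, is_setup=True),
    Stage("measure-add", lambda: sum(range(1000))),
)
timings = bench.run()
```

The key design point is that the setup/measure distinction lives on the
stage itself, so swapping measure- for setup- only flips a flag.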
The precise CLI syntax for the above would look like:
-b simple-add='CompositeBenchmark(SetupEmptyWorkdir(),
SetupWorkdirAddFileSource(PartRandomFileSource(1,1024,0.5)), MeasureAdd(),
MeasureCommit(), MeasureWorkingPull())'
Getting this all working took more changes than I expected. I'll go
through the patch in detail:
examples/composite-1.sh: an example of how to use the composite
benchmark and memtime instrumenter
mkrandom.c: a simple C program for generating random binary files; it could
be extended to generate random text. The Python code also implements this,
so the C program is not strictly necessary, but it is a lot faster.
file_source.py: various classes for generating (roughly) reproducible data
for use in the add/commit/pull testing. It can currently generate partially
random files, which is useful because monotone's resource usage changes
based on how compressible the data is. It can also check out specific
revisions of monotone (approximately the released versions), both as a
directory tree and as a single large file. Finally, it includes the
composite source, which lets you combine several of these sources.
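A partially random file source could be sketched as below. The function
name, seeding scheme, and "constant byte plus random tail" layout are my
assumptions for illustration, not the patch's implementation; the point
is that the random fraction controls compressibility and a fixed seed
makes the output reproducible.

```python
import random

def write_part_random_file(path, size, random_fraction, seed=0):
    """Write `size` bytes where roughly `random_fraction` of the content
    is pseudo-random (incompressible) and the rest is a repeated constant
    byte (highly compressible). A fixed seed makes runs reproducible."""
    rng = random.Random(seed)
    n_random = int(size * random_fraction)
    data = bytearray(b"a" * (size - n_random))      # compressible part
    data.extend(rng.randrange(256) for _ in range(n_random))  # random part
    with open(path, "wb") as f:
        f.write(bytes(data))
```

For example, write_part_random_file("f", 1024, 0.5) produces a 1 KiB
file that is about half compressible, matching the
PartRandomFileSource(1,1024,0.5) spirit of the CLI example above.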
Makefile: build mkrandom
benchmarks.py: split setup() into initial_setup() and run_setup(). I ran
into the problem that the current code copies the setup directory to the
run directory, but if you set up a working directory, the database path
is hardcoded to the setup directory rather than the run directory.
Splitting setup lets benchmarks do as much work as possible outside
instrumentation, and do one-time work during initial setup that is then
copied rather than re-executed. I also made all the benchmarks inherit
from the Benchmark class to pick up the default do-nothing methods.
Finally, I added the various setup and measure benchmark classes shown
above.
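The shape of that split could look something like this (a sketch under
my own naming assumptions, not the patch's code): initial_setup() runs
once and its output is copied per run, while run_setup() runs after the
copy so any paths it records point at the run directory.

```python
import os

class Benchmark:
    """Base class providing default do-nothing hooks, so subclasses
    only override the phases they actually need."""
    def initial_setup(self, directory):
        """Run once; the resulting directory is copied for each run."""
        pass

    def run_setup(self, directory):
        """Run per-run, after the copy, so recorded paths (e.g. the
        database path) refer to the run directory."""
        pass

    def run(self, directory):
        pass

class AddBenchmark(Benchmark):
    """Example subclass: records the database path during run_setup,
    so it is always relative to the run directory."""
    def run_setup(self, directory):
        self.db_path = os.path.join(directory, "test.db")

bench = AddBenchmark()
bench.initial_setup("/tmp/setup-dir")   # inherited no-op
bench.run_setup("/tmp/run-dir")
```

Because db_path is assigned in run_setup() rather than initial_setup(),
it correctly names the run directory even though the setup directory
was what got copied.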
driver.py: update to take advantage of the setup() split; add the call
to run_setup().
instrumenter.py: catch execution errors so we can print the command that
failed to execute; useful for debugging.
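The error-catching pattern might be sketched like this (the function
name run_instrumented is hypothetical; only the general
catch-and-report idea comes from the patch description):

```python
import subprocess

def run_instrumented(cmd):
    """Run a command list; on failure, re-raise with the exact command
    line included so the user can reproduce the failure by hand."""
    try:
        subprocess.run(cmd, check=True, capture_output=True)
    except (OSError, subprocess.CalledProcessError) as e:
        raise RuntimeError("command failed: %s (%s)" % (" ".join(cmd), e))
```

Catching OSError as well as CalledProcessError matters because a
missing binary and a nonzero exit status fail in different ways, and
both should report the offending command.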
instrumenters.py: minor bugfixes to get MemTimingInstrumenterObj working.
Write out the timings file to /tmp rather than the current directory.
This avoids mtn add operations having to know the list of all files
that will be added; instead we can just add everything in the working
directory without accidentally picking up a timing file.
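That placement decision amounts to something like the following (the
helper name and pid-based filename are my assumptions): keep the
timings file under the system temp directory, outside any working
directory that might be added wholesale.

```python
import os
import tempfile

def timings_path():
    """Return a per-process timings file path under the system temp
    directory, so a recursive add of the working directory can never
    pick it up by accident."""
    return os.path.join(tempfile.gettempdir(),
                        "memtime-timings-%d" % os.getpid())
```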
mtn.py: add the operations needed to get the new benchmarks working.
There is some redundancy here with what Nathaniel started implementing;
I didn't try to reconcile that.
namespace.py: import the file sources.
-Eric
composite-benchmark.patch
Description: Binary data