On Tue, Sep 20, 2011 at 02:08:42AM +0200, Reinhold Kainhofer wrote:
Am Tuesday, 20. September 2011, 01:09:20 schrieb Graham Percival:
** Different patch and issue management tools
* 1-3 hours: write a script that checks that every Patch-new
can apply to master, compiles correctly, and creates a
regtest comparison so the local human can check it and make
it Patch-review instead. If there’s a problem before the
regtest comparison, the script automatically changes it to
Patch-needs_work.
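A rough sketch of that pipeline, assuming placeholder stage commands (the real apply/compile/regtest invocations would go in their place):

```python
import subprocess

# Hypothetical stages of the patch-testing pipeline; the commands here
# are stand-ins, not the actual LilyPond build invocations.
STAGES = [
    ("apply",   "git apply --check patch.diff"),
    ("compile", "make"),
]

def run_stage(command):
    """Run one shell command; True if it exited successfully."""
    return subprocess.run(command, shell=True,
                          capture_output=True).returncode == 0

def triage(stages, runner=run_stage):
    """Walk the stages in order.  Any failure before the regtest
    comparison demotes the patch to Patch-needs_work; if everything
    passes, a human inspects the regtest comparison and can set
    Patch-review."""
    for name, command in stages:
        if not runner(command):
            return ("Patch-needs_work", name)
    return ("awaiting-human-review", None)
```

The point of keeping the stages as data is that adding a regtest-comparison step later is a one-line change.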
The problem is that if someone pushes a broken commit to master, it will
cause every patch to be marked Patch-needs_work, even though the patches
themselves are not to blame...
That's why the script would/should check that master compiles,
before trying any of the patches. Naturally, if master fails, it
sends a panic email to lilypond-devel. That email would include a
list of all people who pushed to master since the last commit
which is known to compile.
Whatever happens with the patch tools, I'm imagining a "does
master compile" script running at least once every 24 hours.
* 1-5 hours: automatically switch any Patch-review to
Patch-needs_work if there are any non-LGTM comments.
If one of my comments does not contain "LGTM", that does NOT mean that I have
objections. Rather, I might be giving some input and ideas, or have a
question, but I just don't feel qualified enough to give the go. If I object
to a patch, I clearly state it. Absence of an LGTM definitely does NOT
mean I object.
That's a question of how much intelligence+time we want to put
into this script... and of course the behaviour of any of these
scripts can (and should!) be overridden by the Patch Meister (or
any developer, really) at the first sign of trouble.
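For concreteness, the completely naive version of the rule would be something like this (a sketch, not a proposal for the final behaviour):

```python
def review_status(comments):
    """Naive 'LGTM or bust' rule: any reviewer comment that does not
    contain LGTM flips the patch back to Patch-needs_work.  As Reinhold
    points out, this misclassifies neutral questions and suggestions as
    objections, so a human must be able to override it."""
    for text in comments:
        if "LGTM" not in text:
            return "Patch-needs_work"
    return "Patch-review"
```

Any extra intelligence (ignoring questions, only counting explicit objections) would go inside that loop.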
Even if we have a completely stupid "LGTM or bust" script, the
Patch Meister just weighs the options:
- examine all patches from the countdown, then mark them
Patch-needs_work or Patch-push?
- examine all patches marked Patch-needs_work by a script, and
check that each demotion is valid
- examine nothing until a programmer complains about a false
Patch-needs_work.
I think a false Patch-needs_work would happen about 10% of the time.
Is it worth it? Maybe yes, maybe no... hey, we could even throw
some Bayesian machine learning into the process! :)