
Re: CVS and assessment


From: Paul Sander
Subject: Re: CVS and assessment
Date: Mon, 4 Jun 2001 11:06:20 -0700

A couple of the contributors to this thread seem to assume the following:

- Checking in untested code is bad.
- Running metrics automatically, whatever they are, is bad.

I disagree.

Metrics, like any other tool, have their good and bad sides.  They must
be chosen carefully and their results must be interpreted properly.
And several good metrics must be used to offset efforts to manipulate them.
Programmers, as both the victims and the implementors of these tools, must
make sure that the tools are used properly by their audience and that the
tools' capabilities meet the audience's expectations (or vice versa).

As for checking in bad code, well, there are ways of handling that.  One
is to track sufficient state so that only good code is pulled out of the
version control system for builds and code sharing.  Another is to require
individuals to create branches and merge when their code is ready to see
the light of day.  I would NEVER discourage anyone from checking in code,
working or not, provided that the chosen method does not break the overall
process.
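
To make both approaches concrete, here is a minimal sketch in CVS; the tag
and branch names (GOOD_BUILD, my_task_branch) and the module name (mymodule)
are placeholders of my own, not anything CVS prescribes:

    # Approach 1: after a build passes its tests, move a floating tag
    # onto the revisions that made up that build (run from the tested
    # working directory):
    cvs tag -F GOOD_BUILD

    # Builds and code sharing then pull from the tag, not the head:
    cvs checkout -r GOOD_BUILD mymodule

    # Approach 2: work on a private branch, merge when ready:
    cvs tag -b my_task_branch     # create the branch
    cvs update -r my_task_branch  # switch the working copy onto it
    # ...commit freely on the branch, then fold it into the trunk:
    cvs update -A                 # back to the trunk
    cvs update -j my_task_branch  # merge the branch changes
    cvs commit -m "Merge my_task_branch: ready for daylight"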

--- Forwarded mail from address@hidden

Thomas Tuft Muller wrote:
> 
> Todd Denniston wrote:
> 
> | I will however only give you the first line of such an
> | abomination that I have
> | been forced to live under:
> | cvs checkout -r HEAD -p $MODULE_NAME 2>/dev/null > $TMP_FILE2
> |
> | It's up to you to determine what kind of tyrannical analysis of
> | $TMP_FILE2 you
> | want to do with a FORTRAN program or perl script.
> 
> I fail to see why some(?) programmers are so reluctant to have their work
> analyzed and assessed. I'm a programmer myself, and I'm pretty sure that
> such a tool could benefit good programmers and maybe expose the bad seeds.
> Sound competition with fellow workers has never hurt anyone. I mean, a lot of
> employees out there have their work scrutinized and analyzed every day. Why
> should we be any different? Do we think we are irreplaceable no matter how
> much and what we actually do?
> 
> Programmers constitute a very arcane society and I think a lot of companies
> would like to be able to assess the quality of the Software Development
> department as well as they do for other departments. The problem is that
> "Software Quality" is an understatement in a world where extreme programming
> emerges as the prevalent development process. I think (good) programmers
> should be the first in line for deploying tools and processes for assessing
> their own work. Or are we too scared?

Only that we will be dealing with _another_ lazy boss who thinks that just
because the tool says the code is bad, it is.  These bosses are lazy because
they do not want to spend the time to have the programmers understand each
other's code (peer review), or to manage the fact that you will occasionally
get a programmer who is a backbiter.

> 
> --
> 
> Thomas
> 

> I wonder if anybody has some experience with using CVS to follow up on
> work quality, project progress, individual measurement of the amount of work
> done over specific periods of time, etc. I imagine a scenario where each
> programmer is forced to check in once a week (preferably with a specific
> tag indicating that the files are possibly untested/uncompilable).

Hey, if the problem is small enough to solve in a week, fine.  However, being
forced to check in code that may not be functional, or not fully functional,
and then justify it is only disruptive.

> Proper
> scripts could analyze the code regarding, inter alia, the number of
> lines/words/bytes,

Again, only a manager who is lame-brained (wait, how many does that not
describe?) would still look at lines/words/bytes instead of functionality
added or bugs squashed.
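
For reference, the metric being dismissed here is a shell one-liner, which is
part of why it is so easy to game (this assumes a checked-out working
directory; the *.c pattern is just an example):

    # Lines, words, and bytes across the checked-out sources;
    # trivially inflated with comments or dead code:
    find . -name "*.c" | xargs wc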

> commenting, commenting rules, coding-rules, 

OK, this might make some sense; however, peer review is better, because most
rules have some point where they break down with respect to the
readability/maintainability of the code.

> class
> cohesion, method-length, class-length, parameter names, variable names, etc,
> etc. 

Again, peer review still does this better and more sanely.

> In addition, the scripts should also take into account the state of the
> archive the last time the scripts were run, and analyze/provide statistics
> about the change.

Unless your scripts can check all of the functionality of the system, or show
you which trouble reports were fixed, they are pretty useless.  Better to check
the commit comments and make the programmer put in something useful that can be
traced back to the requirements or trouble reports, so that you can use
something like http://www.red-bean.com/cvs2cl/ to generate your reports for
your boss.
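
As a rough sketch of that workflow (the PR#nnnn commit-message convention is
invented here for illustration, but cvs2cl itself works as shown):

    # Make every commit traceable to a trouble report:
    cvs commit -m "PR#1234: fix off-by-one in the report pager"

    # Generate a ChangeLog from the repository's history:
    cvs2cl.pl

    # Pull out every change tied to a given trouble report:
    grep "PR#1234" ChangeLog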

> 
> Combined with a weekly/monthly time plan submitted by the programmers, this
> could be a valuable tool for managers to see overall as well as
> individual progress/quality.
> 

If your programmers are any good, the things you mention above will not be
valuable for measuring progress; you should be measuring system functionality
and how well the programmers estimate the time needed to add functionality and
fix bugs.

--- End of forwarded message from address@hidden



