Book review: Software Metrics

This book review covers Software Metrics: Establishing a Company-Wide Program by Robert B. Grady and Deborah L. Caswell (Prentice-Hall, 1987).

[Scan of the book's cover]

Something Different

I thought I was getting a book on measuring individual developers’ productivity. I couldn’t see how that could be done without being overwhelmed by unintended side effects, and I hoped that reading a case study of a successful implementation would help me understand, and maybe convince me. That’s not what this book turned out to be, though.

The book’s actual purpose is to help others set up a company-wide software metrics program. The case study it presents is an effort at HP to apply metrics to their software products and processes rather than to their people. Lots of good stuff in here, though. Let’s see…

Problem-Solving Highlights

A primary benefit of the book for me was seeing Grady and Caswell’s problem-solving skills in action and learning from them. Here are some things they do in the book that I found valuable:

  • Keeping the Goal in View. In any effort of this size, it would be easy to lose focus on your real purpose, what you’re actually trying to accomplish. We catch glimpses of the authors keeping the goal in sight when they speak in terms such as these:

    …we now had our own database of metrics information to help guide us. But guide us where? …Data is meaningful when it is relevant and significant. Therefore, accuracy and applicability of both the data and the model must be better than one’s intuition (p. 151);

    and

    For complexity metrics to be of value, they must be used to create appropriate flags or trigger actions (p. 204).

  • Rationale and Historical Context. The authors don’t just say what they did; they describe the circumstances surrounding each decision and what led them to decide the way they did. For instance, on the topic of manual versus automatic data collection:

    A common complaint is that the whole collection process should be automated, leaving only the results to be analyzed. However, there is a positive aspect in doing manual data collecting. It gives the manager a detailed awareness of the data and its significance. Forms and questionnaires expedite the manual processes. Also, automation too early may “freeze” useless measures into the project process. Some experimentation is always needed at first (p. 117).

    As another example, in addition to telling how and why they revised the training class’s objectives and content (Chapter 13), they show the first revision of their data collection forms and explain how the forms changed and why (see Appendix B). So again, rather than just being given the answer, you see how they went about analyzing and solving the problem.

  • Whose Problem Is It? What do you do when people don’t do what they’ve agreed to do? As Gause and Weinberg ask, “Whose problem is it?” When things didn’t go as expected with the new metrics program, the authors said, “This is our problem” and modified the program accordingly. Once again, they were focused on the goal of having the program succeed. Examples:
    • Early feedback from a new metrics training class led them to rework the course’s objectives as well as content (Chapter 13).
    • When “required” fields on the data collection forms went unfilled, they responded not by increasing the pressure on managers to supply the information, but by digging deeper to find out why it wasn’t being supplied (and ultimately splitting the form in two so that the smaller form for required data wasn’t so intimidating) (p. 258).
  • Risk Management in action. Should the project name and/or project manager name be attached to the metrics data?  And what level of care is needed in keeping such information confidential? It’s instructive to see their thought process in deciding these issues (p. 136).
  • Concepts of organizational transitions. The team anticipated and planned for the natural stages people would go through as they encountered the changes proposed to support the metrics program: “All changes require individuals to adapt to new circumstances. It is a grave mistake to assume that any announcement, no matter how insignificant, is the end of the process. All transitions begin with an ending” (p. 91). The authors quote a paper titled “Managing Organizational Transitions” by William Bridges, in which Bridges outlines three phases people go through in such situations: letting go of the old situation; going through “a difficult ‘wilderness’ time in the gap between the old reality and the new one”; “Then (and only then) emerg[ing] with new energy, purpose, and sense of self to make a new beginning.”
  • Primary versus secondary metrics. There seemed to be a natural process of discovery that took time. Primary metrics (those based directly on the data you gather) were not sufficient to support process analysis and improvement; secondary metrics (calculated from the primary metrics) were generally needed for that purpose. The authors add the helpful detail that, in their experience, you couldn’t jump straight to those secondary metrics (see the sketch following this list):

    Each project uses a slightly different development process. Understanding a process involves defining what the process is and deciding what metrics make sense to collect. We will call these more detailed metrics secondary metrics, not because they are less important, but because the need for them was recognized only after a basic understanding of the primary metrics was reached at a given division (p. 105).
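
To make the primary/secondary distinction concrete, here is a minimal sketch of my own, not from the book: the field names, the numbers, and the complexity threshold are all invented for illustration. Primary values are what you record directly on the collection forms; secondary values are derived from them; and, per the p. 204 quote above, a metric earns its keep when it triggers an action:

    from dataclasses import dataclass

    @dataclass
    class PrimaryMetrics:
        """Values recorded directly during a project (primary data)."""
        kloc: float             # thousands of non-comment source statements
        defects_found: int      # defects logged during testing
        engineer_months: float  # total engineering effort
        max_complexity: int     # highest cyclomatic complexity of any module

    def secondary_metrics(p: PrimaryMetrics) -> dict:
        """Derive secondary metrics by combining primary measurements."""
        return {
            "defect_density": p.defects_found / p.kloc,  # defects per KLOC
            "productivity": p.kloc / p.engineer_months,  # KLOC per eng-month
        }

    def complexity_flags(p: PrimaryMetrics, threshold: int = 15) -> list:
        """Turn a raw measurement into a trigger for action."""
        flags = []
        if p.max_complexity > threshold:
            flags.append(f"complexity {p.max_complexity} exceeds {threshold}:"
                         f" schedule a design review")
        return flags

    project = PrimaryMetrics(kloc=12.5, defects_found=87,
                             engineer_months=30.0, max_complexity=22)
    print(secondary_metrics(project))  # defect_density 6.96, productivity ~0.42
    print(complexity_flags(project))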

Other Highlights

  • Gestalt. One of the forms (p. 103) for gathering data about a defect contains the question

    How was problem found?

    One of the choices is

    gestalt (flash of inspiration to try something)

    Gestalt!  What a fun concept – I think I have quite a bit of that.  Perhaps I should add it to my resume… :)

  • Ishikawa. This book gave me my first introduction to the Ishikawa or “fishbone” diagram, a simple yet expressive way to depict the causes contributing to an effect. Seems like this could come in handy for whiteboard discussions.
  • Estimation. This book reinforced some themes from the Software Estimation book: the importance of reestimation, and the cone of uncertainty (pp. 143-144). And there is a story about how a good early estimate helped a project by surfacing a scheduling issue soon enough for project management to address it (pp. 156-157).
  • Kinds of quality. This was a really eye-opening concept for me: we can’t just “improve the quality” in some generic sense.  As the authors say:

    For example, adding a new function might improve functionality but decrease performance, usability, and/or reliability.  Since each project has different priorities, it is necessary to make an early decision of what kind of quality is most important (p. 159).

  • The need to look at productivity and quality together. When you look at productivity (lines of code, or whatever your metric measures) alone, you don’t get the full picture. As the authors say, “Looking at productivity alone could encourage efforts which result in productivity improvement at the expense of quality” (p. 170). A small illustration follows this list.
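
Here is a small sketch of my own (all numbers invented) of how the picture can flip once quality is read alongside productivity: the “faster” release ships more code per engineer-month but twice the defects per KLOC:

    releases = {
        "release_a": {"kloc": 10.0, "engineer_months": 25.0, "defects": 40},
        "release_b": {"kloc": 14.0, "engineer_months": 25.0, "defects": 112},
    }

    for name, r in releases.items():
        productivity = r["kloc"] / r["engineer_months"]  # KLOC per eng-month
        defect_density = r["defects"] / r["kloc"]        # defects per KLOC
        print(f"{name}: {productivity:.2f} KLOC/eng-month, "
              f"{defect_density:.1f} defects/KLOC")

    # release_b looks 40% more "productive" (0.56 vs. 0.40 KLOC/eng-month),
    # but its defect density is double (8.0 vs. 4.0 defects/KLOC), so the
    # "improvement" may just be quality traded away.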

Overall

I found the book a very worthwhile read and would recommend it not only to those contemplating the implementation of a company-wide metrics program, but also to any problem solver: it’s a great opportunity to get inside the heads of a couple of master problem solvers and learn from how they think.
