We are two sprints into a newly adopted agile process. Somehow, the term "efficiency" made its way into our sprint planning: the idea is that we should pad our sprint-level expectations with a little non-task time to account for day-to-day life on a project team, such as meetings and underlying architectural work that can’t be directly tied to a given business use case.
The Metric That Got Away
During the demo after our second sprint (given to a large portion of the company), our Project Manager mentioned that our "efficiency" rating had hit 75%, and that we would be basing our third-sprint expectations on that 75% metric.
The developers had a bit of a fit (behind closed doors). In most of our experience, "efficiency" ratings for iterative sprints are calculated on a rolling average of at least three sprints. The 75% number completely ignored the fact that in the first sprint we had only achieved an "efficiency" in the low 60’s. The PM acquiesced and agreed to drop the "efficiency" rating to a 2-sprint rolling average of 70%.
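The arithmetic behind the disagreement is simple enough to sketch. A minimal example, assuming a first-sprint figure of 65% (the post only says "low 60’s", so that exact number is an assumption), and hypothetical names throughout:

```java
public class EfficiencyAverage {

    // Average the most recent `window` sprint efficiencies.
    static double rollingAverage(double[] efficiencies, int window) {
        int start = Math.max(0, efficiencies.length - window);
        double sum = 0;
        for (int i = start; i < efficiencies.length; i++) {
            sum += efficiencies[i];
        }
        return sum / (efficiencies.length - start);
    }

    public static void main(String[] args) {
        // Sprint 1 (assumed ~65%), sprint 2 (75%).
        double[] sprints = {65.0, 75.0};

        // Looking at the latest sprint alone gives the number the PM quoted.
        System.out.println(rollingAverage(sprints, 1)); // 75.0

        // The 2-sprint rolling average is noticeably lower.
        System.out.println(rollingAverage(sprints, 2)); // 70.0
    }
}
```

The gap between those two numbers (75% vs. 70%) is exactly what the argument was about: basing the next sprint on one data point instead of a trend.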
When the PM then informed the CTO that the targeted efficiency rating was being dropped, all hell broke loose. Evidently the C-level officers had also grabbed hold of the 75% "efficiency" score and were bandying that number around the boardroom.
The Exact Wrong Thing
For me, this "efficiency" rating is not a bad internal metric to have, but only as a general guideline. What had happened was that upper management got hold of the number and started treating it as a performance measure, i.e. "we need to get your efficiency up", equating efficiency with productivity. (This last sentence paraphrases one of Wayne Allen’s blog posts about a parallel fear with "velocity".)
I have another problem with the "efficiency" term itself. It implies that the other (100-X)% of time is lost to "inefficiency", which is patently false. That time is also spent fighting the infrastructure debt that a project inevitably collects.
(I wish we could come up with terms other than efficiency/inefficiency. A lot of agile people refer to "velocity", but I think that is a different concept, comparing actual time to estimated time, which is almost the inverse of our "efficiency" calculation. "Velocity" may have a similar application, but it is technically different from our current approach.)
The Right Things
I’ve spent some of my "inefficiency" time over the past two sprints getting a bunch of Code Quality reports running in our Maven build. Today the PM and I worked together to start publishing a quick dashboard of the metrics coming out of these Code Quality reports.
Now, if people start grabbing onto metrics, they will be the kind of metrics we actually want to drive. "Oh, you’ve got 1000 Checkstyle violations, 125 FindBugs warnings, 75 TODO’s, and the Cobertura report says 66% of your code is not covered by tests? Why don’t we spend some time working those numbers down?" Now, when we take the time to refactor or clean up the code, we’ll have the metrics proving that we’re doing something positive, and not just being "inefficient".
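For anyone wanting to do something similar: wiring those four report types into a Maven build happens in the `<reporting>` section of the `pom.xml`. This is a rough sketch, not our actual configuration; plugin versions are omitted and the rule-file details are left out:

```xml
<reporting>
  <plugins>
    <!-- Coding-standard violations -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
    </plugin>
    <!-- Static-analysis bug warnings -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
    </plugin>
    <!-- Scans the source for TODO (and similar) tags -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>taglist-maven-plugin</artifactId>
    </plugin>
    <!-- Test-coverage report -->
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>
```

With that in place, `mvn site` generates all four reports in one pass, which is what makes a single dashboard practical.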