Two Guidelines for Metrics

source link: https://blog.odd-e.com/yilv/2015/09/two-guidelines-for-metrics.html

By Lv Yi on September 15, 2015 11:19 AM

During the recent Advanced ScrumMaster course, two guidelines for performance metrics emerged.

Continuous Improvement over Performance Evaluation

"Measuring and Managing Performance in Organizations" by Robert D. Austin is a great source for the topic of metrics. He made the distinction in terms of the purpose of using the metrics, either as motivational or as informational.

The motivational purpose ties to Performance Evaluation, while the informational purpose ties to Continuous Improvement. The same metric can serve either purpose. Take unit test coverage as an example. Management may use it as a KPI for the team, the higher the better; that is motivational. The team itself may find that coverage provides meaningful insight to guide its improvement in unit testing and therefore decide to measure it; that is informational.

Austin claims that any metric used for a motivational purpose inevitably leads to measurement dysfunction to some extent. We see this in reality when teams write unit tests with no checks or asserts at all, purely for the sake of hitting a measured coverage target.
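To make the dysfunction concrete, here is a minimal, hypothetical sketch (Python with pytest, my own illustration rather than anything from the course): the first test executes the code and raises line coverage but asserts nothing, while the second uses the same coverage metric to guide a test that actually verifies behaviour.

    # Hypothetical production code: apply a percentage discount.
    def discount(price, percent):
        return price - price * percent / 100

    # Coverage-driven "test": every line of discount() runs, so the coverage
    # tool counts it as covered, but with no assert it can never fail.
    def test_discount_for_coverage_only():
        discount(100, 20)

    # Informational use of the same metric: coverage points at untested code,
    # and the team responds with a test that actually checks behaviour.
    def test_discount_checks_behaviour():
        assert discount(100, 20) == 80

Both tests push the coverage number up by the same amount; only the second one would ever catch a wrong discount calculation.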

This leads to the first guideline: Continuous Improvement over Performance Evaluation

Collaboration over Accountability

When you cannot yet avoid the motivational use of metrics, the technique of measuring up helps. Traditionally, we measure based on the span of control. By measuring up, we measure based on the span of influence.

Take a couple of examples. Traditionally, we measure developers on their development output, e.g. lines of code, and testers on their testing output, e.g. number of bugs found. By measuring up, we measure the collective output of the cross-functional team. This applies at a broader scale too. Suppose that multiple teams work on different features, and those features together form a complete customer scenario. Traditionally, we measure each team's individual output, e.g. delivered features. By measuring up, we measure the collective output of the multiple teams, e.g. the delivered customer scenario.

Through measuring up, we promote the bigger common goal, and thus more collaboration. Would this sacrifice clear accountability? Probably yes, but does it matter?

Yves Morieux's TED Talk "How too many rules at work keep you from getting things done" makes a great point on this question.

In a relay race, who is to blame (sorry, who is accountable :) when the baton drops? It is not clear. So, in order to have clearer accountability, you might introduce a third person whose sole responsibility is to take the baton from one runner and pass it to the next; he would then be accountable when the handover fails. This is a system designed for failure, failure with clear accountability. However, would it help win the race?

Moreover, you could try measuring just one level up, which still takes accountability into consideration to some extent.

This leads to the second guideline: Collaboration over Accountability

Conclusion

People often ask what metrics we should use in an Agile context. In my humble opinion, these two guidelines are more critical to your success in using metrics effectively than the choice of any particular metric.

Note: special thanks to my co-trainer Sun Ni for great insights.

