The only metric that counts in KM
A recent article by Steve Denning throws light on the only KM metric that counts: Impact.
Image from Wikimedia Commons
The article in Forbes, entitled “Why Agile Often Fails: No Agreed Metrics”, discusses the metrics system used at Amazon and compares it with the metrics used when Denning was CKO at the World Bank. Denning makes the point that metrics of the latter kind seldom make a difference.
“Someone asked the fateful question, that still haunts me: had any of us ever experienced a significant change in our organization’s behavior as a result of those metrics? We went around the room and the answers were unanimous: none of our organizations had ever changed their behavior significantly as a result of the wonderful sets of metrics that we had developed”.
He identifies four types of metric commonly applied to change programs such as Agile and Knowledge Management:
“A good idea”: an activity that is undertaken because enough influential people believe that it is likely to have some benefits. These are often, as Scrum.org points out, “really just conjectures about what customers might like (sometimes referred to as HiPPOs, or Highly Paid Person’s Opinions) … The specification of the solution is vague and imprecise…” At worst, they are the hobby horse of some organizational faction promoting their own particular interest.
“An output”: something internal, measurable but not necessarily related to any external customer. This is better than a mere conjecture that it is a good idea, but still not getting the organization very far in terms of understanding the activity’s value.
“An outcome”: something external, such as customer satisfaction in relation to value delivered. It is often subjective, vague and fuzzy. The Net Promoter Score, which fits into this category, has been shown to be positively correlated with actual impact and is certainly better than not having any measure, but its meaning can be ambiguous and difficult to read.
“The impact”: changes in customer behavior that the product or service is intended to elicit. This goes beyond merely whether the customer buys the product or service and may include measures of actions—or non-actions—that you would expect if the customer is truly delighted, such as timely availability of the item, speed of delivery, percentage of unexpected “hiccoughs” in delivery, absence of returns and complaints, re-purchases of the product and related products, responses to surveys, recommendation of the products to other customers, and so on.
The forthcoming Knowledge Manager’s Handbook also identifies four types of metric:
There are four main types of KM metrics, and each has a specific purpose. It is important not to confuse them, for reasons we explain below (in the book). Together they form an integrated system of measurement for ongoing administration, learning and governance. The four types are: activity metrics, performance (or compliance) metrics, maturity metrics and impact metrics.
Like Steve, we believe Impact metrics are the only ones that really count and that really make a difference, whether that is Impact on the customer or Impact on the work of the organisation. As we say in the book:
The purpose of impact metrics is to enable senior management to assess the continuing contribution of KM to the business, and they should link back to the intended benefits and outcomes of KM outlined in your KM policy. … Impact metrics can be “hard” in terms of quantifiable benefits to the business (e.g. reduced costs, improved sales, improved quality), in which case they will be derived from specific examples continuously gathered from your KM activity streams. Impact metrics may also be “soft” in the form of evaluations from managers in the different lines of business as to the added value that KM brings to the way that work is conducted. “Soft” evaluation metrics like this can be gathered through surveys, interviews or focus groups. These mechanisms, if conducted periodically, are also useful ways to identify opportunities to streamline, enhance or fine-tune the KM processes or the support that is offered by the KM operational team.
Steve contrasts the way he used metrics at the World Bank with the way metrics are used at Amazon.
Amazon’s metrics operate at the fourth level and aim to measure impact. Organizations that don’t have these kinds of metrics and the associated behavioral norms in place to support them are flying blind. It shouldn’t come as a surprise that they frequently crash. Why don’t intelligent organizations measure impact? One reason is that thinking through impact in advance isn’t easy … but it’s one of the rules of the road at Amazon. In fact, work on an activity or capability can’t start unless and until the team has figured out how it will measure customers’ response. Amazon builds in customer metrics as a “forcing function” from the outset. Teams may spend weeks just thinking through the metrics.
Steve believes, Amazon believes, and we also believe that Impact metrics are the only ones that count; they should be set in advance and used to measure, drive and tune your KM implementation efforts. This is not easy, and you may need to “spend weeks just thinking through the metrics”, but anything else is “flying blind”. With impact metrics you can set targets and measure whether they are met. You can change direction based on the results, use them to steer your program, and use them to measure the value contribution KM brings.