How to identify a knowledge "near miss"

In organisational safety management, a “near miss” is treated as evidence that safety practices need to be improved. We can do the same in knowledge management.


I have often used Safety Management as a useful analogue for KM, and here’s another good crossover idea.

In safety management, safety breaches (accidents, injuries, “lost time incidents”) are tracked as metrics and indicators that safety management needs to be improved.

They also track “near misses” – incidents where nobody was harmed, but only by luck, or “unplanned events that did not result in injury, illness or damage – but had the potential to do so”. A hammer dropped from height that lands a few feet away from a worker on the ground, a bolt blown past someone’s head by an escape of compressed gas, a near collision between two aircraft – all are examples of near misses indicating that safety management needs to be improved.

In KM we can track lost knowledge incidents, where time, money or effort was wasted because knowledge should have been available but “got lost” along the way. The knowledge is, or was once, available to the organisation, but failed to reach the person who needed to act upon it, with resulting cost to the organisation in terms of recovery cost, rework, lost sales, delay and so on. If you are lucky you can quantify this cost as part of the Cost of Lost Knowledge, aka the Cost of Ignorance, and use it in your KM business case.
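
If you do log such incidents, the arithmetic of the business case is straightforward. Here is a minimal Python sketch of totalling up the Cost of Lost Knowledge – the record structure, field names and figures are invented for illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class LostKnowledgeIncident:
    """One case where available knowledge failed to reach the person who needed it."""
    description: str
    recovery_cost: float  # cost of fixing the immediate problem
    rework_cost: float    # cost of repeating work already done elsewhere
    delay_cost: float     # cost of schedule slippage, lost sales, etc.

def cost_of_lost_knowledge(incidents: list) -> float:
    """Total Cost of Ignorance across all logged incidents."""
    return sum(i.recovery_cost + i.rework_cost + i.delay_cost for i in incidents)

# Two hypothetical incidents for the example
incidents = [
    LostKnowledgeIncident("Repeated a pilot that had already failed", 40_000, 25_000, 10_000),
    LostKnowledgeIncident("Re-derived a known supplier fix", 5_000, 8_000, 2_000),
]
print(f"Cost of Lost Knowledge: {cost_of_lost_knowledge(incidents):,.0f}")
```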

But we can also track Knowledge Near Misses: cases where the knowledge was not lost, and therefore no cost was incurred, but where it was found or transferred only by lucky chance.

I heard a great example recently in a client organisation (and I paraphrase below).

The organisation was planning an activity. It seemed a little risky but quite doable, and there was management pressure to go ahead. They were discussing this activity in a meeting, and someone from another part of the business who happened to be in the meeting by chance (he was not invited to discuss this particular activity) spoke up and said “I was part of a team that tried this before. It was a complete disaster, and we are still recovering from the mess it created”.

The lessons from this previous project had not been captured, they were not in the lessons database, and the project report was not findable but buried in a mass of project files on a hard drive somewhere. Had that person not by chance been at the meeting, the “complete disaster” would most likely have been repeated with resulting costs in manpower, money and reputation.

This was a knowledge near miss: an event that did not result in cost to the organisation through lost knowledge, but had the potential to do so, and where the cost was avoided only through luck. With a proper KM framework in place, followed by all staff in a systematic way, this knowledge would not have been lost, and the planned activity could have been assessed in the full light of historic lessons.

You can find another KM near miss story here.

The knowledge near miss is a useful metric which provides evidence of the value of, and need for, effective KM.


The only metric that counts in KM

A recent article by Steve Denning throws light on the only KM metric that counts: Impact.


The article in Forbes, entitled “Why Agile Often Fails: No Agreed Metrics”, discusses the metrics used at Amazon, and compares them with those used when Denning was CKO at the World Bank – metrics which, he points out, seldom made a difference:

“Someone asked the fateful question, that still haunts me: had any of us ever experienced a significant change in our organization’s behavior as a result of those metrics? We went around the room and the answers were unanimous: none of our organizations had ever changed their behavior significantly as a result of the wonderful sets of metrics that we had developed”.

He identifies four types of metric commonly applied to change programs such as Agile and Knowledge Management, as follows:

“A good idea”: an activity that is undertaken because enough influential people believe it is likely to have some benefits. These are often, as Scrum.org points out, “really just conjectures about what customers might like (sometimes referred to as HiPPOs, or Highly Paid Person’s Opinions) … The specification of the solution is vague and imprecise…” At worst, they are the hobby horse of some organizational faction promoting its own particular interest.

“An output”: something internal and measurable, but not necessarily related to any external customer. This is better than a mere conjecture that something is a good idea, but it still does not get the organization very far in terms of understanding the activity’s value.

“An outcome”: something external, such as customer satisfaction in relation to value delivered. It is often subjective, vague and fuzzy. The Net Promoter Score, which fits into this category, has been shown to be positively correlated with actual impact and is certainly better than having no measure at all, but its meaning can be ambiguous and difficult to read.

“The impact”: changes in customer behavior that the product or service is intended to elicit. This goes beyond merely whether the customer buys the product or service, and may include measures of actions – or non-actions – that you would expect if the customer is truly delighted, such as timely availability of the item, speed of delivery, percentage of unexpected “hiccoughs” in delivery, absence of returns and complaints, re-purchases of the product and related products, responses to surveys, recommendation of the products to other customers, and so on.

The forthcoming Knowledge Manager’s Handbook also identifies four types of metric:

There are four main types of KM metrics, and each has a specific purpose. It is important not to confuse them, for reasons we explain below (in the book). Together they form an integrated system of measurement for ongoing administration, learning and governance. The four types are: activity metrics, performance (or compliance) metrics, maturity metrics and impact metrics.

Like Steve, we believe that Impact metrics are the only ones that really count and really make a difference, whether that is Impact on the customer or Impact on the work of the organisation. As we say in the book:

The purpose of impact metrics is to enable senior management to assess the continuing contribution of KM to the business, and they should link back to the intended benefits and outcomes of KM outlined in your KM policy. …. Impact metrics can be “hard” in terms of quantifiable benefits to the business (e.g. reduced costs, improved sales, improved quality), in which case they will be derived from specific examples continuously gathered from your KM activity streams. Impact metrics may also be “soft” in the form of evaluations from managers in the different lines of business as to the added value that KM brings to the way that work is conducted. “Soft” evaluation metrics like this can be gathered through surveys, interviews or focus groups. These mechanisms, if conducted periodically, are also useful ways to identify opportunities to streamline, enhance or fine-tune the KM processes or the support that is offered by the KM operational team.

Steve contrasts the way he used metrics at the World Bank with the way metrics are used at Amazon:

Amazon’s metrics operate at the fourth level and aim to measure impact. Organizations that don’t have these kinds of metrics and the associated behavioral norms in place to support them are flying blind. It shouldn’t come as a surprise that they frequently crash. Why don’t intelligent organizations measure impact? One reason is that thinking through impact in advance isn’t easy … but it’s one of the rules of the road at Amazon. In fact, work on an activity or capability can’t start unless and until the team has figured out how it will measure customers’ response. Amazon builds in customer metrics as a “forcing function” from the outset. Teams may spend weeks just thinking through the metrics.

Steve believes, Amazon believes, and we also believe, that Impact metrics are the only ones that count: they should be set in advance, and used to measure, drive and tune your KM implementation efforts. This is not easy – you may need to “spend weeks just thinking through the metrics” – but anything else is “flying blind”. With impact metrics you can set targets and measure whether those targets are met. You can change direction based on whether or not they are met, use them to steer your program, and use them to measure the value contribution KM brings.
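
As a minimal illustration of setting impact targets in advance and measuring against them, here is a short Python sketch – the metric names and values are invented for the example, not drawn from Amazon or from the book:

```python
# Hypothetical impact metrics, with targets agreed before the KM program starts
targets = {"cost_reduction_pct": 5.0, "bid_win_rate_pct": 30.0, "rework_hours_saved": 2000}
actuals = {"cost_reduction_pct": 3.2, "bid_win_rate_pct": 34.0, "rework_hours_saved": 1500}

# Compare actual impact against target, metric by metric, to steer the program
for metric, target in targets.items():
    actual = actuals[metric]
    status = "on target" if actual >= target else "needs attention"
    print(f"{metric}: actual {actual} vs target {target} -> {status}")
```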

Remember: if KM is not making an impact, then it’s not worth doing. So make sure you measure and report that impact.


7 Metrics for the KM supply chain

The Supply Chain analogy for KM suggests several metrics we can use.

I have often used the analogy of the supply chain as one way of thinking about KM. This involves looking at KM as a chain of processes supplying knowledge to the user.

This analogy has the benefit of thinking about KM from the point of view of the knowledge user. You can ask “If a person in this organisation were in need of a specific piece of knowledge to make a specific decision, what system is in place to make sure that this knowledge a) gets to the person on time, and b) is of the correct quality?”

And like any analogy, it brings with it many other ways to think about KM. Can we apply “Lean Supply Chain” thinking to KM, for example? Can we remove waste from our Knowledge Supply Chain? Can we think of the Knowledge Manager as a supply chain manager?

Or – the subject of our blog today – can we use common Supply Chain metrics to help us understand how to metricate KM?

Here are 7 metrics from the supply chain world which might help us decide on metrics for our Knowledge Management Framework (a short sketch after the list shows how some of them might be computed).

  • Backorders – unfulfilled orders from the customer. In KM terms, these might be search queries, or questions to a Community of Practice, which receive no answers. These are indications of the need to create knowledge resources for the user, and the number of unfilled requests is a proxy for the completeness of your knowledge base (both tacit and explicit).
  • Cycle time. There are many definitions of cycle time in the Supply Chain world, but for KM the crucial cycle time is how long it takes from the first observation of new knowledge to that knowledge being embedded in the knowledge bases, training courses and community of practice resources. In lesson-learned terms it might be the time from “lesson identified” to “lesson closed”; in CoPs it might be the “question to answer” time.
  • Defects – defective supplied material. This is a quality measure of your knowledge content, measuring how much of it is out of date, wrong, or unhelpful. You could measure the quality of lessons entering your lessons management system for example, or of articles published to a knowledge base, or of answers in a community forum.
  • Fill Rate – the amount of ordered supplies filled on the first order. In KM, this might be the number of community questions answered by the first response, or the percentage of times the answer is found in the first search.
  • Inventory costs – what it costs you to stock and manage your inventory (cost of stock, cost of warehouse, salaries of warehouse staff etc). In KM terms, this is the cost of operating your KM framework, including the cost of KM roles, the licence cost for KM software, and the time cost of populating the system. This represents the total cost to the business of operating KM.
  • Gross margin return on inventory – the gross margin divided by the inventory costs, a popular metric for retail stores. In KM terms, the gross margin would be the overall value of KM to the business, which you would track and estimate through success cases, value stories and metrics such as decreased costs or increased sales. It is in effect the KM ROI.
  • Inventory turnover – the average annual use of your inventory; for example, if a store carries 1,000 items and sells 10,000 items a year, that’s a 10 times inventory turnover. In KM terms this would apply only to explicit knowledge: you would measure the number of reads of knowledge articles divided by the number of articles. You could of course get smarter and look at which articles get the most reads and which get none at all.
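
To make a few of these concrete, here is a minimal Python sketch of how the cycle time, fill rate, inventory turnover and ROI metrics might be computed – all data structures and numbers are invented for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical inputs (names and numbers are assumptions, not real data)
lessons = [
    {"identified": date(2023, 1, 10), "closed": date(2023, 3, 1)},
    {"identified": date(2023, 2, 5), "closed": date(2023, 2, 20)},
]
questions_asked, answered_by_first_response = 120, 84
article_reads_per_year, article_count = 15_000, 500
km_value_estimate, km_operating_cost = 1_200_000, 300_000

# Cycle time: days from "lesson identified" to "lesson closed"
cycle_times = [(les["closed"] - les["identified"]).days for les in lessons]
print(f"Mean lesson cycle time: {mean(cycle_times):.0f} days")

# Fill rate: share of community questions answered by the first response
print(f"Fill rate: {answered_by_first_response / questions_asked:.0%}")

# Inventory turnover: reads per article per year
print(f"Inventory turnover: {article_reads_per_year / article_count:.1f}")

# Gross margin return on inventory, i.e. the KM ROI
print(f"KM ROI: {km_value_estimate / km_operating_cost:.1f}x")
```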

Hopefully that gives you some ideas for a few more metrics you can use to make sure your Knowledge Supply Chain is working – delivering valuable knowledge to the knowledge workers in your organisation in an efficient, reliable and effective way.


Expectation, metrics, rewards, support – the KM Governance quartet

Four elements make up Knowledge Management governance: expectations, metrics, rewards and support.

Governance is often the missing element in Knowledge Management, and although it is one of the four legs on the KM table, it is the one that gets least attention.  This is partly because governance is not easy, and partly because there is no clear published model for KM governance.

Governance represents the things that the organisation does, and the management of the organisation does, that drive the KM behaviours and adoption of the KM Framework. We see four elements to governance – expectations, metrics, rewards and support.

Knowledge Management Expectations.

The first thing management needs to do in terms of governance is to set the expectations for KM. This requires a set of clear corporate expectations for how knowledge will be managed in the organization, including accountabilities for the ownership of key knowledge areas, and the definition of corporate KM standards, KM principles and KM policies. These documents should tell everyone what is expected of them in Knowledge Management terms.

Different departments can then add to these expectations, and individuals with KM roles will have KM expectations written into their job description (see examples here).  Within a project, the expectations are set by the Knowledge Management Plan.  Expectations may also be set using the competency framework.

If there are no clear expectations, nobody will know what they should be doing in KM terms.

Knowledge Management Metrics.

If standards and expectations have been set, then the organisation needs to measure against these expectations. For example, if the corporate expectation is that every project will conduct a lessons learned session, and every knowledge topic has an owner, then you should measure whether this is happening.

There are other types of KM metric as well – see these blog posts for more discussion.
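
As a minimal sketch of such a compliance metric, assuming a simple project register (the field names and values here are invented for the example):

```python
# Hypothetical project register
projects = [
    {"name": "Alpha", "lessons_session_held": True},
    {"name": "Beta", "lessons_session_held": False},
    {"name": "Gamma", "lessons_session_held": True},
]

# Compliance: how many projects met the expectation of holding a lessons learned session
held = sum(p["lessons_session_held"] for p in projects)
print(f"Lessons learned compliance: {held}/{len(projects)} projects ({held / len(projects):.0%})")
```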

If there are no metrics, then nobody will know what people are actually doing in KM.

KM Rewards and Recognition.

If you are measuring people’s performance against the expectations, then this needs to be linked to rewards and recognition. If people do what they are expected to, this should be reflected in their rewards. If they don’t do what is expected, then there should be a sanction. See these blog posts for a wider discussion of incentives.

If there are no links between metrics and reward/recognition, then nobody will care about the metrics. Particularly important are the sanctions for not doing KM. If people can dodge their expectations and get away with it, then this sends a strong message that the expectations are actually options, and not expectations at all.

Knowledge Management Support.

It is unfair to set expectations, measure people against them, and then reward people based on these measures, unless you make the expectations achievable in the first place. Therefore you need to set up the systems, training, coaching, reference materials and so on that make it possible for people to meet their expectations.

If there is no support, then you have set up an unfair system which people will resent.

Together, the quartet of Expectations, Metrics, Reward/recognition and Support form the basis of an effective Knowledge Management governance system.


Do your KM metrics cover seeking and using as well as sharing?

When it comes to determining Knowledge Management Metrics, make sure you cover the Demand side as well as the Supply side. 


The most difficult aspect of Knowledge Management to address is re-use, and yet re-use of knowledge is the whole point of KM. All of the discussing, capturing, documenting and storing of Knowledge is in service of re-use. Therefore when we metricate KM, we need to think about metrics for Knowledge Demand and Re-use, and not just Supply.

It is easy to create metrics for Knowledge Supply, for example:

  • Number of lessons added to the Lessons Database
  • Number of blogs
  • Frequency of articles on the community blog
  • “Best blog post”
  • Number of new items in the portal
  • Frequency of edits and updates to items on the portal
  • Number of wikipages
  • Individuals who make most contributions to the knowledge base
  • “Best knowledge base article”

It’s not so difficult to cover the demand side as well, through metrics such as these (a short sketch after the list shows how two of them might be computed):

  • Number of questions asked per month on the community forum
  • Time between question and first answer
  • Number of answers per question
  • Number of readers of the community blog
  • Number of reads per knowledge asset or knowledge article
  • Frequency of searches of the knowledge base
  • Search success rate
  • “Time to find” knowledge
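
Two of these – the search success rate and the question-to-answer time – might be computed along these lines; the log structures and values are invented for illustration:

```python
from datetime import datetime

# Hypothetical search log: did the user open any result?
searches = [
    {"query": "valve failure", "result_clicked": True},
    {"query": "bid template", "result_clicked": False},
    {"query": "commissioning", "result_clicked": True},
]

# Hypothetical community forum log
questions = [
    {"asked": datetime(2023, 5, 1, 9, 0), "first_answer": datetime(2023, 5, 1, 14, 30)},
    {"asked": datetime(2023, 5, 2, 11, 0), "first_answer": datetime(2023, 5, 3, 10, 0)},
]

# Search success rate: searches where the user found something worth opening
successes = sum(s["result_clicked"] for s in searches)
print(f"Search success rate: {successes / len(searches):.0%}")

# Time between question and first answer, in hours
for q in questions:
    hours = (q["first_answer"] - q["asked"]).total_seconds() / 3600
    print(f"Question answered in {hours:.1f} hours")
```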

It’s a little harder to measure re-use, but it can be done through metrics like these:

  • Instances of lessons re-use
  • Evidence of Community value, delivered through solutions to members’ problems, and presented as success stories
  • User feedback and satisfaction ratings
  • Number of lessons which have been embedded into procedures
  • Average time taken to embed lessons

The demand metrics and re-use metrics can be very interesting. For example, one of the ways the World Bank disseminates knowledge to external stakeholders is by publishing reports. It would be easy just to measure the number of reports created, but in addition they commissioned a study, “Which World Bank Reports Are Widely Read?”, which analysed which of the reports were widely downloaded and cited, and which remained unread. A lot of effort and knowledge goes into these reports, and the last thing the World Bank wants is to create reports which are never downloaded.

Demand-side and re-use metrics such as these are very important to the success of your KM program.

Make sure your metrics system is well balanced, covering supply, demand and re-use.
