How to measure the waste in the KM supply chain

If KM is a lean supply chain for knowledge, how can you measure the amount of waste in the chain?

[Image: shredded waste, from Wikimedia Commons]

I have often used the concept of a knowledge supply chain as a way of describing Knowledge Management; the supply chain being a mechanism for providing knowledge to the knowledge worker in an efficient and effective way, just as a materials supply chain provides materials to the manual worker. 

If you go one step further, you can use the principles of the lean supply chain, as applied to materials supply, to make the knowledge supply chain even more efficient. We do that by eliminating “the 7 wastes” of overproduction, waiting, unnecessary transport, non-value-add processing, unnecessary motion, excess inventory, and defects. 

But how can we measure the current level of waste in the knowledge supply chain? Here’s how.

  • Waste #1. Over-production: producing more knowledge than we need.

We might measure this by looking at how much of what is published is actually useful. We could, for example, look at the read-rates of content (how much content never gets read), or at the duplication of content. The World Bank, for instance, commissioned a study, “Which World Bank Reports Are Widely Read?”, which analysed which of its reports were widely downloaded and cited, and which remained unread and therefore represent over-production. A lot of effort and knowledge goes into these reports, and the last thing the World Bank wants is to create reports which are never downloaded. We could also look at the push/pull ratio in communities of practice, balancing the number of question-led discussions against the number of publication-based discussions (see this analysis of LinkedIn discussions, for example).
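
As a minimal sketch of how these two metrics might be computed, here is some Python over invented data; the download counts and discussion labels are hypothetical stand-ins for whatever your publishing platform and community tools can export:

```python
# Minimal sketch: measuring over-production from content metadata.
# All records below are invented; "downloads == 0" stands in for "never read".

contents = [
    ("Report A", 1500),   # widely downloaded and cited
    ("Report B", 0),      # never downloaded: over-production
    ("Report C", 3),
]

unread = [title for title, downloads in contents if downloads == 0]
read_rate = 1 - len(unread) / len(contents)
print(f"Read-rate: {read_rate:.0%}; unread items: {unread}")

# Push/pull ratio: publication-led (push) vs question-led (pull) discussions.
discussions = ["question", "publication", "question", "publication", "publication"]
push = discussions.count("publication")
pull = discussions.count("question")
print(f"Push:pull ratio = {push}:{pull}")
```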

  • Waste #2. Waiting. 

Here we measure the clock-speed of knowledge, such as the time it takes for a community question to be answered, the time it takes to find relevant synthesised knowledge, or the time it takes for lessons to be a) collected and b) embedded into guidance.
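
A minimal sketch of the clock-speed measurement, assuming your community platform can export asked/answered timestamps (the timestamps below are invented):

```python
# Minimal sketch: measuring "waiting" as the clock-speed of knowledge.
from datetime import datetime
from statistics import median

# Hypothetical (asked_at, answered_at) pairs exported from a community Q&A forum.
questions = [
    (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 14, 30)),
    (datetime(2024, 1, 5, 11, 0), datetime(2024, 1, 8, 10, 0)),
    (datetime(2024, 1, 9, 8, 0), datetime(2024, 1, 9, 9, 15)),
]

# Waiting waste: elapsed hours between asking and answering.
wait_hours = [(answered - asked).total_seconds() / 3600
              for asked, answered in questions]
print(f"Median time-to-answer: {median(wait_hours):.1f} hours")
```

The same calculation works for lesson cycle times: swap in the date a lesson was identified and the date it was embedded into guidance.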

  • Waste #3. Unnecessary transport of materials.

In our knowledge management world, this really refers to hand-offs, and we might measure the number of links or steps between knowledge supplier and knowledge user. Communities of practice, for example, where “ask the audience”-type questions can be asked and answered directly by the knowledge holder, minimise the number of hand-offs; with a large community of practice, everyone is at One Degree of Separation. A wiki, where knowledge suppliers can update knowledge themselves without going through an unnecessary editorial process, can also minimise hand-offs.
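
One way to make the hand-off count concrete is to treat the knowledge flow as a graph and count the hops from supplier to user. The roles and routes below are invented for illustration:

```python
# Minimal sketch: counting hand-offs as path length in a knowledge-flow graph.
from collections import deque

def handoffs(graph, source, target):
    """Minimum number of hand-offs (hops) from source to target, via BFS."""
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no route: the knowledge never arrives

# An editorial publishing chain: four hand-offs from knowledge holder to user.
editorial = {
    "knowledge holder": ["team lead"],
    "team lead": ["km editor"],
    "km editor": ["portal"],
    "portal": ["user"],
}

# A community of practice: the holder answers the asker directly.
community = {"knowledge holder": ["user"]}

print(handoffs(editorial, "knowledge holder", "user"))  # 4
print(handoffs(community, "knowledge holder", "user"))  # 1
```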

  • Waste #4. Non-value-added processing: doing more work than is necessary.

We might measure this by looking at the degree of processing the end user has to do to get an answer to their question, and how much synthesising is done by the user that could instead have been done further up the supply chain. For example, does a user have to read and understand every lesson in a database on a particular topic, or can you make sure these have already been synthesised into guidance?
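
A simple proxy metric, assuming you can tag which lesson topics already have synthesised guidance (the topics and flags below are invented):

```python
# Minimal sketch: how much synthesis is left to the end user.
# True means guidance already exists upstream for that topic.
topics_with_guidance = {
    "drilling": True,
    "procurement": False,
    "safety": True,
    "logistics": False,
}

synthesised = sum(topics_with_guidance.values())
total = len(topics_with_guidance)
print(f"Synthesised upstream: {synthesised}/{total} topics; "
      f"users must process raw lessons for the other {total - synthesised}.")
```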

  • Waste #5. Unnecessary motion.

We measure this by counting the number of places the knowledge user needs to go to in order to find relevant knowledge. Do they have to visit every project file to find lessons, or are the lessons collected in one place? Is there one community of practice to go to, or many? LinkedIn, for example, at one time had 422 discussion groups covering the topic of Knowledge Management, rather than just one. That is a waste of 421 groups (99.8% waste).
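
The arithmetic behind that figure, as a one-off sketch:

```python
# Minimal sketch: unnecessary motion as redundant places to look.
# With n venues covering one topic, everything beyond the first is waste.
n_groups = 422
redundant = n_groups - 1
print(f"{redundant} redundant groups ({redundant / n_groups:.1%} waste)")
# -> 421 redundant groups (99.8% waste)
```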

  • Waste #6. Excess inventory.

As with Waste #1, we look at the unnecessary, duplicate or unread knowledge sitting in the knowledge bases and lessons-learned systems.
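
A minimal sketch of an inventory audit, assuming exact-duplicate detection by hashing normalised text and an exported view count (all records below are invented):

```python
# Minimal sketch: flagging excess inventory in a knowledge base.
import hashlib

# Hypothetical (id, text, view_count) records from a lessons database.
documents = [
    ("lesson-001", "Always pressure-test the valve before start-up.", 42),
    ("lesson-017", "Always pressure-test the valve before start-up.", 0),  # duplicate
    ("lesson-023", "Contract reviews need a second signatory.", 0),        # unread
]

seen, duplicates, unread = {}, [], []
for doc_id, text, views in documents:
    digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
    if digest in seen:
        duplicates.append((doc_id, seen[digest]))  # (copy, original)
    else:
        seen[digest] = doc_id
    if views == 0:
        unread.append(doc_id)

print("Duplicates:", duplicates)
print("Unread:", unread)
```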

  • Waste #7. Defects.

Here we measure how much knowledge is out of date, and how much is of poor quality. Some organisations, for example, measure the quality of lessons within a lesson management system, and often find that much of the content is of very poor quality. If your users are telling you that the lesson management system is full of poor-quality lessons, then you have a defect problem.
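
A minimal sketch of a defect count, assuming each lesson carries a last-review date and a 1–5 quality rating; the freshness threshold, field names and records below are all assumptions:

```python
# Minimal sketch: counting defects as stale or low-quality lessons.
from datetime import date

MAX_AGE_DAYS = 2 * 365   # assumed freshness threshold
MIN_QUALITY = 3          # assumed 1-5 quality rating

# Hypothetical (id, last_reviewed, quality) records.
lessons = [
    ("lesson-001", date(2020, 6, 1), 4),   # stale
    ("lesson-002", date(2024, 2, 10), 4),  # fine
    ("lesson-003", date(2019, 3, 5), 1),   # stale and poor quality
]

today = date(2025, 1, 1)
defects = [lid for lid, reviewed, quality in lessons
           if (today - reviewed).days > MAX_AGE_DAYS or quality < MIN_QUALITY]
print(f"Defective lessons: {defects} ({len(defects)}/{len(lessons)})")
```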

High scores on any of these metrics are indicators that your KM framework, or knowledge supply chain, is far from efficient.

Originally published at nickmilton.com.


Shared by: Nick Milton
