5 design principles for a KM platform – lessons from IDEO

IDEO based their KM platform around 5 design principles. Here they are.

Image from wikimedia commons

Here is a fascinating article by Doug Solomon, entitled “The Tube: IDEO Builds a Collaboration System That Inspires through Passion.” It describes how IDEO, the famous design and innovation agency, built themselves a KM platform – “The Tube” – after discovering that there was nothing off-the-shelf that fitted their culture and requirements. One key lesson was as follows:

“The unique success of the Tube comes from the insight that effective knowledge sharing is a social activity that is enabled by technology, rather than a technological solution bolted onto an existing work culture”.

This is a very interesting way to look at it – as a platform for social activity rather than as a technology – and is a refreshingly different approach from a well-respected organisation.

IDEO built the Tube based on 5 design principles, listed below with commentary from the article, and from me.

 1) Build Pointers to People: rather than trying to take all of the knowledge out of people’s heads and store it in a giant database, focus primarily on helping people to identify and connect to knowledgeable people. We know KM is about Connecting as much as (if not more than) it is about Collecting. So make sure your platform does both.

 2) Build Rewarding Systems: a system that requires altruism is unlikely to be successful. Similarly, systems that require users to participate (e.g. compliance-based design) rarely get anything more than participation at the lowest required level. Effective knowledge sharing systems must have a “What’s in it for me” for the users and contributors. IDEO saw The Tube as a platform to allow employees to showcase their best work.

 3) Demand Intuitive Interfaces: The system must present as few points of friction as possible in the process of becoming an active user. It must be easy and intuitive to use. Remove all barriers to adoption!

 4) Take the Road More Travelled: if a tool requires people to go out of their way to use it, adoption will always be a challenge, no matter how wonderfully designed. Wherever possible, strive to integrate tools into existing work processes. If people are used to receiving notifications through email, then link your system to email. Don’t expect people to develop a new work habit just to be able to share knowledge, because they won’t. For example, the IDEO blogging system didn’t take off until the team added a program that sent digest emails with new content from subscribed blogs.

 5) Iterate Early and Often: building effective systems for organisations means designing tools and workflows that mirror the social systems they are meant to support, which means multiple cycles of iteration between the platform design team and the users. We talked about this in terms of the KM Framework as a whole, when we described multiple pilot projects, and releasing a KM “minimum viable product” so the KM team can evolve their KM solution to fit the needs and working habits of the users.

If you need to design your own KM platform, these 5 principles should be on the first page of your design strategy.

View Original Source (nickmilton.com) Here.

What technologies are used for KM, and what value do they deliver?

Here are some more results from our 2014 and 2017 Global Surveys of Knowledge Management: a plot of KM technology usage and value.

We asked the survey participants to rate a range of different types of technology by the value they have added to their KM program, giving them the options:

  • Large value
  • Moderate value
  • Slight value
  • No value
  • Too soon to tell
  • Do not use

 513 people answered this question.

The answers allow us to look not only at the usage of each technology, but also (through a weighted average of the first four responses) at the value that it delivers.

The chart above shows the survey results in order of value, as a stacked bar chart, with the weighted value shown as a line (this line would be at 100% if all the participants that used a technology claimed it had “large value”, and at 0% if they all claimed it had no value).

The top of the dark grey area represents the usage percentage for these technologies (the light grey area above represents people who do not use this technology). The top of the green area represents the percentage of people who said this technology had added “large value”.
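For readers who want to reproduce the weighted value line from raw counts, here is a minimal sketch in Python. The counts and the equal-step weights are my assumptions; the survey article does not publish either.

```python
# Hypothetical response counts for one technology (not real survey data).
counts = {
    "Large value": 120,
    "Moderate value": 200,
    "Slight value": 80,
    "No value": 30,
}

# Assumed weights, stepping evenly from 1.0 down to 0.
weights = {"Large value": 1.0, "Moderate value": 2 / 3, "Slight value": 1 / 3, "No value": 0.0}

def weighted_value(counts, weights):
    """Weighted average of the four value ratings, expressed as a percentage."""
    total = sum(counts.values())
    return 100 * sum(counts[k] * weights[k] for k in counts) / total

score = weighted_value(counts, weights)  # 100% would mean everyone said "Large value"
```

"Too soon to tell" and "Do not use" responses are excluded, matching the post's description of a weighted average over the first four options only.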

The technology types are listed below in order of usage, and in order of value.

Technology type in order of usage (most common at the top):

1. Document collaboration
2. Best practice repository
3. People and expertise search
4. Portals (non-wiki)
5. eLearning
6. Enterprise content management
7. Enterprise search
8. Question and answer forums
9. Blogs
10. Lessons Management
11. Video publication
12. Wikis
14. Social media other than microblogs
16. Expert systems
17. Data mining
18. Innovation funnel
19. Semantic search

Technology type in order of value delivered when used (most valuable at the top):

1. Enterprise search
2. Best practice repository
3. Document collaboration
4. Enterprise content management
5. Portals (non-wiki)
6. People and expertise search
8. Question and answer forums
9. Lessons Management
10. Expert systems
11. Brainstorming/ideation/crowdsourcing
12. Social media other than microblogs
13. Video publication
14. Wikis
15. Innovation funnel
16. Data mining
17. Semantic search
18. Microblogs
19. Blogs

What does this tell us?

We could take these results at face value, and say that the chart and the lists above represent the usage of the various technology types and (independently) the value of the various technology types.  The strong correlation between usage and value that we see in the chart and lists could represent a tendency for the more valuable technologies to get the greatest use. This is a perfectly valid interpretation.

An alternative argument would be to say that technologies deliver more value the more they are used. Technologies at the top of the list are mainstream technologies, used frequently, and delivering high value. Technologies at the bottom of the list are less mainstream, and deliver less value to the companies that use them, because those companies make less use of these technologies. This is also a plausible interpretation.

Even with this interpretation, we could still look for “good performing” technologies which deliver more value than their popularity would imply, and “poor performing” technologies which deliver less value than their popularity would imply. Under this interpretation, the best performing technologies are Enterprise Search and Expert Systems (both of them 6 places higher in the value list than the usage list) and the worst performing technology would be Blogs (10 places higher in the usage list than the value list). This of course does not mean that Blogs have no value; it could mean that the way they are being used is not adding the expected value (see my post about the “director’s blog”).
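The “performer” comparison above is simply a rank difference between the two lists. A sketch, using a truncated subset of the technologies (so the deltas differ from those quoted for the full lists):

```python
# Truncated, illustrative orderings - not the full survey lists.
usage_order = ["Document collaboration", "Best practice repository",
               "People and expertise search", "Enterprise search",
               "Expert systems", "Blogs"]
value_order = ["Enterprise search", "Best practice repository",
               "Document collaboration", "People and expertise search",
               "Expert systems", "Blogs"]

def rank_deltas(usage, value):
    """Positive delta = ranks higher for value than for usage (a 'good performer')."""
    usage_rank = {t: i for i, t in enumerate(usage)}
    return {t: usage_rank[t] - i for i, t in enumerate(value) if t in usage_rank}

deltas = rank_deltas(usage_order, value_order)
```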

We saw very similar results for this question between the 2017 and 2014 surveys, with some minor changes. Those technologies which most increased in use between 2014 and 2017 were Microblogs and Video publication, and not surprisingly these have also seen the greatest increase in reported value delivery as well. The technology which decreased in use the most over the 3 year period is the innovation funnel technology (capturing and filtering improvement suggestions).

View Original Source (nickmilton.com) Here.

Should you use a single technology platform for KM?

Does KM need a single technology platform? More likely it needs several technologies.

This blog post was prompted by a thread in Stan Garfield’s SIKM community asking what technology platform people use for KM. My immediate thought was that a single platform probably is not sufficient. However, let’s look at our survey results to see what people actually use in real life.

Firstly, let’s look at how many tools people use for KM. 

In our global KM survey, run in 2014 and again in 2017 and answered by over 700 knowledge managers, people were asked to select, from a list of technologies, which ones they applied as part of their KM program, and if their tool was not on the list, to identify it in an “Other” box. The number of selections (including the Other box) is shown in the pie chart below. This particular question was answered by 270 people.

Please note, from the pie chart below, that 37% of respondents use a single technology platform for KM, while 63% use more than one. 
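Counting single-platform versus multi-platform respondents is straightforward once the selections are in hand. A sketch with invented respondent records (the real survey data is not public):

```python
# Hypothetical respondent records: the set of tools each respondent selected.
responses = [
    {"SharePoint customised"},
    {"SharePoint customised", "Confluence", "Yammer"},
    {"In-house tools"},
    {"SharePoint out of the box", "Documentum"},
]

single = sum(1 for r in responses if len(r) == 1)
pct_single = 100 * single / len(responses)   # share using exactly one platform
pct_multiple = 100 - pct_single              # share using more than one
```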

Secondly, let’s look at which technologies people say they use.

The table shows which technologies people said they use for KM. Please note that respondents were asked to identify every tool they used, so 63% of respondents identified more than one of these tools. Please also note that 162 people replied “other”, which is the third highest category after SharePoint.

Technology brand
Number of users
SharePoint customised
SharePoint “out of the box”
In-house tools
IBM Social Content Management

The people who replied “other” identified the following additional tools:

.NET Cloud Program – switching to Confluence, 365, Tallyfox, Dartfish tv, 3ms internal , Adobe Defense Connect Online, Microsoft Outlook, Aptify, Basecamp, Bespoke, Black board, BMC Remedy, BMC Remedy, O365 (collaboration), Box, Caltura Video, SAP Jam, SAP Portal, Confluence, Adobe Connect, CISCO WebEx and Jabber, Cloud based server – Dropbox, Controlled Access Folders, Cornerstone, Cornerstone OnDemand, Cosential; Newforma; Fairsail; Ajera, CRM Dynamics, CubicWeb Semantic Web Framework, current still using static tool for Intranets (Dreamweaver)., Customized app, cyn.in, Cynapse Cyn.in, Day Communications Intranet, Decisiv search (Recommind), develop own portal , DMS iManage, do not know, Documentum, Documentum , Documentum, Drop Box for external collaboration until we have a solution, eGain, eGain, ELGG, EMC2 E-View, excel, exo, Filesite Document Management – imanage, HP Content Manager, Pega, Huddle, Huddle; Tibbr, IBM Connections, Inquira, iManage, iManage Worksite DMS, ConceptSearching, Recommind Decisiv Search, Outlook, HighQ, In house developed, Institutional repository – DSpace, Integration within ticket tool (CA USD/Service Now), Internal developed platform, Internal support portal, internal system, Internally developed systems, jabber, webx, asana, JAM, JIRA, Joomla, Joomla, Joomla, Knowledgeplaza, Laserfiche, lenus – library  open access repositary , Liferay, LifeRay, Reverb, Linked-In, listservs, Lithium, many in-house tools, Market Logic Software, MEETSYS / I2Kn, Melling and graphic representation applications, Microsoft CRM, Migrating from Lotus to Sharepoint, MindTouch, ServiceNow and Zendesk, MOODLE, moodle, Ms Dynamics, Newsgator, not public information, Office365, OneNote, open asset, pipedrive, 10,000ft, our DMS, PipelineDeals & The Box, platform by ourself, PLM, Plone, Plone by Google, Podio, QDAMax, Qlikview, qimingkeji, radio     internet, Recommind Decisiv Search, Remedy Knowledge Management, Research Management System, Saba, ScienceDirect, 
SharePoint customized is coming in 2018, SharePoint has been slightly customised on a look and feel basis, ShoreTel Connect, Joomla!, Worldox, Sitefinity, O365/Lync, Sitrion, Sitrion NewsGator, Sitrion, Kapow, Semaphore, Skype, skype for business, Skype for business, Newforma, KA Synthesis, Social Network interface – cannot remember the name of the provider, Social Sites, Social squared, Socialtext wikis, Software platforms due to be replaced, Soutron, SSRS, Starmind; Exalead, Synthesis, Synthesis, Tailored EPI Server, Tibbr, Tibbr, Tortoise Subversion / svn, Trello, TRIM, Unily, Verint, Vivisimo, vivo, We are retiring SharePoint and moving to Google in 2017, We leverage the O365 suite of tools, so I would also include Sway, OneDrive, Skype, OneNote, and to a lesser degree, Delve and Planner, as key platforms, We use our DMS, iManage as a repository for explicit knowledge, Workplace (facebook at work), WorkSite by iManage, XING Groups Management; mixxt wiki and doc mngt., Yammer free version, Yolean Checksheets, Zendesk

Thirdly, let’s look at the tools used by the 37% of people who use only one technology.

SharePoint dominates this category: 62% of people who use a single tool use SharePoint.

Please note that this does not say whether SharePoint is a good or bad solution, only whether it is commonly used.

Finally, let’s look at why people use technology in KM in the first place. 

Ideally this should be the place to start – asking “what functions do we need from a KM technology platform?”. As you can see from the graph below, there are many functions.

For many functions, you need many technology platforms. SharePoint, for example, is good at enterprise content management, document publishing, and collaborative working on documents. Its document search can be described as poor, and it is poorer still at searching for people. It is OK as a tool for community forums, but gets few recommendations as a wiki substitute, and you would not use it for video publishing or lessons management.

So it handles documents OK, but probably needs to be supplemented with one or more other KM tools to cover the more tacit areas of knowledge.
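One practical way to act on this is a simple coverage check: map each required KM function to the tools judged to cover it well, and flag the gaps. The mapping below is illustrative, loosely echoing the post’s view of SharePoint’s strengths and weaknesses:

```python
# Hypothetical mapping of required KM functions to the tools that cover them well.
coverage = {
    "enterprise content management": {"SharePoint"},
    "document collaboration": {"SharePoint"},
    "community forums": {"SharePoint"},
    "people search": {"People-finder"},
    "lessons management": set(),   # no chosen tool covers this yet
    "video publishing": set(),     # nor this
}

def gaps(coverage):
    """Functions not well covered by any chosen tool - candidates for a supplementary tool."""
    return sorted(f for f, tools in coverage.items() if not tools)

print(gaps(coverage))  # -> ['lessons management', 'video publishing']
```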

In conclusion; 

You probably need your KM technology to provide multiple functions, and therefore you are likely, like 63% of the survey respondents, to use more than one tool. If you choose a single platform, then recognise what it’s good at, and what it’s poor at, and be prepared to supplement it where needed.

View Original Source (nickmilton.com) Here.

How the coastguard seeks input to lesson learning

Public organisations can learn from the coastguard when it comes to getting wide scale input to lesson learning

Any public organisation, especially one with an element of high-priority service, needs a lesson-learning process to improve that service. The emergency response services in particular have well-developed lesson-learning systems, but here is a wrinkle I had not seen before, from the US Coast Guard.

This article from 2017, entitled “Innovation Program seeks hurricane lessons learned from Coast Guard responders” describes how the US Coast Guard set up what they called the “Hurricane Lessons Learned challenge” on the Coast Guard’s ideas-capturing portal [email protected].

This portal was started as a way to preserve and institutionalize the wealth of lessons learned during hurricane response efforts, and all Coast Guard personnel who participated in any of the response efforts are encouraged to share their observations, issues and ideas.

This is a means of capturing ideas, observations and insights which analysts can later convert into lessons (the sequence from Observations to Insights to Lessons is widely recognised in the lesson-learning community). Some direct lessons may also be captured.

As the article explains

 The Coast Guard routinely captures lessons learned as a way to improve its operations, but the [email protected] challenge offers one distinct advantage: “Our crowdsourcing platform not only provides a place to submit ideas, but also to collaborate on them,” (Cmdr. Thomas “Andy”) Howell said. “Everyone from non-rates to admirals can discuss ideas.” Speed is also an advantage. “Catching the ideas when they’re fresh and raw preserves their integrity,” Howell said.

The US Coast Guard are well aware that capturing lessons is not enough for them to be a learning organisation. These lessons must also drive change.

“The Commandant’s Direction says we need to become an organization capable of continuous learning, so it’s important that the innovations and adaptations that made this response successful are institutionalized,” Howell said. Ideas shared through the Hurricane Lessons Learned challenge are immediately shared with the responsible program. Many will be considered as potential projects for next year’s Research, Development, Test and Evaluation Project Portfolio.

The portal has been very well received

“We’ve heard from pilots, inspectors, commanding officers, district command staffs, reservists, Auxiliary personnel – the entire gamut of responders,” Howell said. “It’s a very user-friendly way to collect information, and comes with the benefit of collaboration,” he said.

This is an approach other similar organisations can learn from.

View Original Source (nickmilton.com) Here.

What’s the difference between a lesson-learned database and a lesson management system?

In this blog post I want to contrast two software systems, the Lessons Database, and the Lessons Management System.

There are two types of Lessons Learned approaches, which you could differentiate as “Lessons for Information” and “Lessons for Action”.

These represent maturity levels 1 and 2 from my three level categorisation, and can be described as follows.

“Lessons for Information” is where lessons are captured and put in reports, or in a database, in the hope that people will look for them, read them, and assimilate them.

“Lessons for Action” is where lessons are used to drive change and improvement. Lessons are captured, reviewed, validated, and action is taken to embed the lessons in process, procedure, standards and/or training.

“Lessons for Information” is supported by a Lessons Database, “Lessons for Action” by a Lessons Management System. Let’s contrast the two.

  • In a Lessons Database, the database is the final home of the lessons. In a Lessons Management System, the final home of lessons is considered to be the compiled knowledge of the organisation, which may be procedures, doctrine, guidance, best practices, wikis, etc.
  • In a Lessons Database, lessons reach their reader through search. In a Lessons Management System, lessons are pro-actively routed to those who need to see them and to take action.
  • In a Lessons Database, lessons accumulate over time (this was the problem with my first Lessons system in the 90s – it got clogged up over time with thousands of lessons, until people stopped looking). In a Lessons Management System, lessons are archived once they have been embedded into process and procedure, and the only live content in the system is the set of lessons currently under review.
  • In a Lesson Database there is only one type of lesson – the published lesson. In a Lesson Management system there are at least two types of lesson – the open lesson (where action has not yet been taken) and the closed lesson, which may then be archived. Some organisations recognize other types, such as the draft lesson (not yet validated) and the Parked lesson (where action cannot yet be taken, or where action is unclear, and where the lesson needs to be revisited in the future).
  • In a Lessons Database, there may be duplicate lessons, out of date lessons, or contradictory lessons. Through the use of a Lessons Management System, these have all been resolved during the incorporation into guidance.
  • In a Lessons Database, there are limited options for measuring the process: you can count how many lessons are in the system, but that’s about it (unless you capture data on re-use). With a Lessons Management System, you can track lessons through to action; measure whether they are being embedded into process; see where they are being held up, and by whom; and see how long the process is taking and where it needs to be speeded up.
Lessons management is the way to go. Lesson databases really do not work in the long term, and usually become lesson graveyards, and the home for Lessons Lost.
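The lesson types described above map naturally onto a small state machine. A sketch, with the statuses taken from the post and everything else (class names, sample lessons) invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    DRAFT = "draft"    # captured but not yet validated
    OPEN = "open"      # validated; action not yet taken
    PARKED = "parked"  # action unclear or not yet possible; revisit later
    CLOSED = "closed"  # embedded into process or procedure; may be archived

@dataclass
class Lesson:
    title: str
    owner: str
    status: Status = Status.DRAFT

    def validate(self) -> None:
        if self.status is Status.DRAFT:
            self.status = Status.OPEN

    def close(self) -> None:
        # Closing means the lesson has been embedded into guidance,
        # at which point it leaves the live system.
        if self.status in (Status.OPEN, Status.PARKED):
            self.status = Status.CLOSED

lessons = [Lesson("Pre-position spares", "ops"),
           Lesson("Brief contractors early", "projects")]
for lesson in lessons:
    lesson.validate()
lessons[0].close()

# In a Lessons Management System, only the open lessons are live content.
open_lessons = [l for l in lessons if l.status is Status.OPEN]
```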

View Original Source (nickmilton.com) Here.

5 reasons why Enterprise Search will never be as good as Google

All the time we hear managers saying “we want a search engine as good as Google”. Here are 5 reasons why you can never even get close.

Image from wikimedia commons

Google is the yardstick for search, and managers seem to want internal enterprise search that works as well and as (apparently) intuitively as Google. But there are 5 good reasons why this will never happen (bearing in mind that I am by no means a search specialist).

1) Search engine optimisation – webpages want to be found

Do you have a website? If you do, you will be as familiar as I am with the deluge of spam emails offering to optimise it for Google search. SEO (Search Engine Optimisation) is big business, and the owners of webpages do a lot of work on Google’s behalf to ensure their pages are indexable, findable and optimised for search.

But who, in an organisation, optimises their documents and sites for internal search? Let me tell you who – Nobody; that’s who.  Unless you are very lucky, few if any people think about the issues of findability when they publish content.

Google is successful in finding sites because those sites want to be found. They are often very keen to be found, because they are trying to sell you something. The search results at the top of Google’s list are often the ones most desperate to be found. Many documents in your enterprise system do not want to be found, often for issues related to confidentiality as described below.

2) The web is interlinked HTML pages, whereas your content is usually isolated Word documents (if you’re lucky!)

Sometimes it’s not even Word documents – I know organisations that save their critical knowledge in pdf form!

The difference between interlinked web pages and isolated documents is critical. Google can crawl the web of interlinked sites, can understand the context of a site partly through its links, and can identify authoritative or important sites based on the number of links that point to them. The search results at the top of the list are often the ones with the most backlinks. The components of a page are also obvious to Google – the title, the first-level headings, the metadata – and these too are used to understand what the page is about.

Your documents are not linked. Each stands alone. Each has to be searched and indexed separately. There are no backlinks. There is no visible structure to the document, other than to the human eye, and the search engine cannot tell a footnote from a level 1 heading.
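To make the backlink point concrete: counting inbound links is the crudest version of the authority signal Google exploits (PageRank goes further by weighting each link by the authority of the page it comes from). A toy sketch with an invented link graph:

```python
# Toy link graph: page -> pages it links to (entirely made up).
links = {
    "home": ["pricing", "docs"],
    "pricing": ["docs"],
    "blog": ["docs", "home"],
    "docs": [],
}

def backlink_counts(links):
    """Count inbound links per page - a crude proxy for authority."""
    counts = {page: 0 for page in links}
    for targets in links.values():
        for target in targets:
            counts[target] += 1
    return counts

counts = backlink_counts(links)
# 'docs' collects the most backlinks, so it would rank as most authoritative.
```

A folder of standalone Word documents has no such graph at all, which is exactly the author’s point.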

3) The hordes of search engine specialists employed by Google.

How many search engine specialists do you employ? None, right? Google employs tens of thousands. That’s one of the reasons their search works better than yours.

This is especially an issue if you are planning to use Semantic search, or to optimise customer search of your knowledge base. In these cases you will need a search engine specialist to build and evolve the ontology, track and improve the search accuracy, and define the synonyms and stop words.  However managers often neglect this aspect, and assume a search-engine is a one-off purchase that will run itself.
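To illustrate the kind of ontology work involved: synonym expansion and stop-word removal are two of the simplest artefacts a search specialist builds and maintains. A sketch with invented terms:

```python
# Illustrative synonym map and stop-word list a search specialist would curate.
SYNONYMS = {"laptop": {"notebook"}, "invoice": {"bill"}}
STOP_WORDS = {"the", "a", "an", "for", "of"}

def expand_query(query):
    """Drop stop words, then expand each remaining term with its synonyms."""
    terms = [t for t in query.lower().split() if t not in STOP_WORDS]
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return sorted(expanded)

print(expand_query("the invoice for a laptop"))
# -> ['bill', 'invoice', 'laptop', 'notebook']
```

Maintaining these lists as the business vocabulary evolves is ongoing work, not a one-off purchase.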

4) Google doesn’t do “security levels”

Google assumes everything is available and visible to everyone. It doesn’t do passwords or access restrictions or security levels. It searches everything that is not on the Dark Web.

A lot of your documents are effectively on the Dark Web – they are in secure folders on Box, or Dropbox, or SharePoint. I recently consulted for an organisation that had 300 separate databases and document management systems. They had opened about 6 of these for indexing; the rest were effectively “dark” as far as search was concerned.

5) The web doesn’t do version control

Every webpage on the web is the only version. Rather than storing a webpage as version 3.5 and writing version 4.0, you just rewrite and publish the page. Every page on the web is the current version, and is constantly under development. Google only returns one version of the page – the current version.

You don’t treat documents in this way. Very often, unless your document management is very good, you will have multiple versions of the same document stored in different places. One of the bugbears of enterprise search is that it will often return all of these versions in your search results.
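This version problem is one reason enterprise search results often need a deduplication pass that the web never requires. A sketch, assuming versions are flagged with a trailing “vX.Y” in the title (a convention invented here for illustration):

```python
import re

# Hypothetical search hits: multiple versions of one document (all invented).
hits = [
    {"title": "Safety Manual v2.1", "path": "/old/safety_v2.1.docx"},
    {"title": "Safety Manual v3.0", "path": "/current/safety_v3.0.docx"},
    {"title": "Budget 2017", "path": "/finance/budget.xlsx"},
]

def version_key(title):
    """Strip a trailing 'vX.Y' so all versions of one document share a key."""
    return re.sub(r"\s*v\d+(\.\d+)*$", "", title, flags=re.IGNORECASE).strip().lower()

def parse_version(title):
    """Extract the version as a comparable tuple; (0,) if no version marker."""
    m = re.search(r"v(\d+(?:\.\d+)*)$", title, flags=re.IGNORECASE)
    return tuple(int(p) for p in m.group(1).split(".")) if m else (0,)

def deduplicate(hits):
    """Keep only the highest version of each document."""
    best = {}
    for h in hits:
        k = version_key(h["title"])
        if k not in best or parse_version(h["title"]) > parse_version(best[k]["title"]):
            best[k] = h
    return list(best.values())

results = deduplicate(hits)  # v2.1 of the Safety Manual is dropped
```

Real enterprise documents rarely carry version markers this cleanly, which is why the problem is so hard in practice.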

So the next time your managers ask “Why can’t we have search like Google” – 

you can reply – “Yes, we can, IF

  • You move all content out of documents onto wikis
  • You keep only one version of every document
  • You train all staff in search engine optimisation
  • You hire a team of search engine specialists, and
  • You make all documents open to everyone”.
Then see what they say!

Enterprise search can work, but it will never work like Google.

View Original Source (nickmilton.com) Here.

The risks when an algorithm takes your job

An interesting Forrester blog highlights some of the risks of process automation

NAO Robot
image from wikimedia commons

We live in a world where automation is beginning to impact knowledge work, in the same way that it impacted manual work in the last century.  On the one hand this is great news for organisations, as it can potentially revolutionize the productivity of the knowledge worker. On the other hand it brings risk.

One attractive opportunity is process automation, where a process that a human used to operate can become automated. The rules, heuristics and knowledge applied by the human can be extracted, using various knowledge management techniques, and turned into an algorithm which a computer or robot can use.

So a job like drafting a will, or cooking a meal, or monitoring a refinery, can be automated. Human knowledge is converted into algorithms, and the know-how that a human used to employ is passed to a machine which will reproduce the logic faithfully, tirelessly, without error, and for a fraction of the lifetime cost.

The problem of course is that know-how is not enough, and we also need know-why.  The know-how is great in a predictable environment, but the know-why is needed once you move into uncharted territory.

That’s one of the messages given in this Forrester blog entitled “Ghost In The Machine? No, It’s A Robot, And It Is Here To Help”. The author, Daniel Morneau, is an advisor on the Technology council, and writes in the blog about robotic process automation, the benefits it will bring, and the governance it will need.

He also quotes one Industry leader who identifies a risk he had not anticipated:

“The hard lesson I learned is that once that knowledge is built into the bot and the employee goes out the door, it’s gone forever,” he said…“We captured the process in the code, right? I mean, the bot knows how to follow the process; we’ve just lost the business logic behind it”

When Morneau asked him what he would do differently, having learned this lesson, he replied:

“I’d document the business logic where I can (and) I’d find a way to keep the best employees whose roles are being replaced so their deep understanding of the business logic can be available as we continue to support our businesses. I mean, the business function that that system is used for is not going away, and having employees who have a deep understanding of our business is the hardest thing to hire for”.

So that’s an interesting conclusion about the need for human knowledge of business logic.

We may increasingly outsource some of the know-how to the robots, but we still need to retain humans with the know-why. 

View Original Source (nickmilton.com) Here.

What you need to know about social tools and KM

Here is a very interesting article from HBR entitled “What managers need to know about social tools” – thanks to Anshuman Rath for bringing it to my attention.  It’s well worth a complete read.

Image by Codynguyen1116
on wikimedia commons

The article by Paul Leonardi and Tsedal Neeley, from the Nov/Dec issue of HBR last year, looks at the way companies have often introduced social tools – often because “Other companies are, so we should too” or “That’s what you have to do if you want to attract young talent”  – and describe some of the surprising outcomes.

Here are some of the points the article makes, with excerpts in quotes:

  • Use of these tools make it easier to find knowledge, through making it easier to find knowledgeable people.

“The employees who had used the tool became 31% more likely to find coworkers with expertise relevant to meeting job goals. Those employees also became 88% more likely to accurately identify who could put them in contact with the right experts”

  • Millennials are not keen adopters of enterprise social tools.

“Millennials have a difficult time with the notion that “social” tools can be used for “work” purposes (and are) wary of conflating those two worlds; they want to be viewed and treated as grown-ups now. “Friending” the boss is reminiscent of “friending” a parent back in high school—it’s unsettling. And the word “social” signals “informal” and “personal.” As a 23-year-old marketing analyst at a large telecommunications company told us, “You’re on there to connect with your friends. It’s weird to think that your manager would want you to connect with coworkers or that they’d want to connect with you on social media [at work]. I don’t like that.”

  • How people present themselves on internal networks is important to developing trust.

“How coworkers responded to people’s queries or joked around suggested how accessible they were; it helped colleagues gauge what we call “passable trust” (whether somebody is trustworthy enough to share information with). That’s important, because asking people to help solve a problem is an implicit admission that you can’t do it alone”.

  • People learn by lurking (as well as by asking).

“Employees gather direct knowledge when they observe others’ communications about solving problems. Take Reagan, an IT technician at a large atmospheric research lab. She happened to see on her department’s social site that a colleague, Jamie, had sent a message to another technician, Brett, about how to fix a semantic key encryption issue. Reagan said, “I’m so happy I saw that message. Jamie explained it so well that I was able to learn how to do it.”

  • The way social tools add value to the organisation and to the individual is by facilitating knowledge seeking, knowledge awareness, knowledge sharing and problem solving. The authors give many examples, mostly of problem-solving and of finding either knowledge or knowledgeable people. One example saved a million dollars, and I will add that to my collection of quantified value stories tomorrow.

  • The value comes from practice communities. The authors do not make this point explicitly, so perhaps I am suffering from confirmation bias here, but they talk about the “spread of knowledge” that they observed as being within various groups covering practice areas such as marketing, sales, and legal.

The authors finish with a section on how to introduce the tools, namely by making the purpose clear (and the purpose may be social, or it may be related to knowledge seeking and sharing), driving awareness of the tools, defining the rules of conduct, and leading by example.

The article reminds us again that social tools can add huge value to an organisation, but need careful attention and application. Just because Facebook and Twitter are busy in the non-work world does not mean similar tools operate the same way at work.

View Original Source (nickmilton.com) Here.

A model for KM technology selection

An example from Schlumberger shows us how selecting KM technology should be done.

image from wikimedia commons

At the KMUK conference a few years ago, Alan Boulter introduced us to the Schlumberger approach to selecting Knowledge Management technology. This is a very straightforward contrast to the common “gadget-store pick and mix” approach, and worth repeating.

Firstly, Schlumberger defined exactly what the business needed from their Knowledge Management technology. They divided these needs into four groups:

  • Connecting people to solutions
  • Connecting people to information
  • Connecting people to communities of practice
  • Connecting people to people
Secondly, they bought technology which does each required job, and only that job, and does it well.  If no technology was available that did the job well enough, they built it in-house.
Thirdly, they stuck with that technology over time, provided it still did the job well, so that people remained familiar with it.
Finally (and this seems so rare nowadays, that I want to emphasise it), if they bought new technology which had optional functionality that duplicated an existing tool, they disabled that functionality. As an example, they brought in SharePoint as an ECM tool, and SharePoint comes with the “MySite” functionality, which can be used to build a people-finder system. Schlumberger had a people-finder system already, and to introduce a second one would be crazy (if you have two systems, how do you know which one to look in?). So they disabled MySite.
Schlumberger have ended up with a suite of ten tools, each perfect for the job, and with no duplicates. Staff know how to find what they need, and which tool to use. Schlumberger are long-term winners of the MAKE awards, and deliver hundreds of millions of dollars annually through KM. Their technology selection forms part of their success.

View Original Source (nickmilton.com) Here.

Do you agree with these two KM assumptions?

A recent paper from the Gartner group seems to contain two basic assumptions about knowledge management which I think are worth addressing. See what you think.

The Gartner paper is entitled Automate Knowledge Management With Data Science to Enable the Learning Organization, and contains the following blocks of text:

“The capture of expertise and experiential knowledge diverts experts and skilled professionals away from productive work. Project managers, software engineers, product developers, hiring managers or customer support agents may be asked to document their work; to participate in peer-to-peer communities to capture and share expertise; to work in more open and transparent ways to encourage serendipitous connections and information flows across an organization; or to shadow peers and learn from observation. But for every minute they do this, it is a minute taken away from doing what they are supposed to be doing”.

“Although still relatively immature, and requiring much manual fine-tuning with domain as well as technical expertise, the “body of knowledge” that powers smart machine movers, sages and doers is extracted automatically by analyzing, classifying, labelling and correlating volumes of structured and unstructured data, including free-form text”.

Now these two chunks of text seem to me to be based on two assumptions. Let’s look at these one by one.

The first assumption is that “knowledge management is not real work”. Note how they say it “diverts away from productive work” and is “taken away from what they are supposed to be doing”. So they do not see KM as productive, nor do they see it as something a knowledge worker should be doing.

But KM is productive – it may not produce a tangible object which can be sold to a customer, but it produces knowledge which can be used to improve processes or innovate new products in the future, and so adds value to the business. In some cases, such as R&D, knowledge is the only value. In Pharma, for example, the success rate of R&D projects in delivering a successful product is only 2% or 3%, and in every other case knowledge is the only product. And as the development manager at Toyota said (and I paraphrase): “In Toyota NPD our job is not to produce cars but to produce knowledge, and from that knowledge great cars will emerge”. Producing knowledge is an investment in the future; producing products gives value in the immediate term, while producing knowledge gives value in the longer term. KM is productive work.

And maybe KM is something that everyone should be doing, or at least contributing to. Who else should contribute if not the knowledge workers? And you cannot say that the job of the engineer is only engineering, the job of the salesperson is only to sell, or the job of the IT coder is only to write code. All of these people have other things to consider – they need to bear finances in mind, and safety, and quality, to name just three. They can’t say “I don’t want to be involved in quality or in safety – just let me do my real job”. The real job needs to be done within a number of contexts, and knowledge management is one of them. Imagine an airline pilot saying “I am not going to take part in this lessons meeting about my recent near miss, because this is a day taken away from what I am supposed to be doing”. That airline pilot would not keep their job for very long, because the aviation industry knows very well the value of knowledge and of knowledge work, and knows that KM is something pilots need to contribute to.

But if there is pressure to balance the demands on the knowledge worker, between their short-term delivery of product and their long-term delivery of knowledge, can smart machines take up the slack? The answer to this depends very much on how much knowledge you think lives in the “volumes of structured and unstructured data, including free-form text”.

Personally I don’t think there is much knowledge in there at all.

The vast majority of structured and unstructured data and documents are work products – the outcome of knowledge work, but not containing the knowledge itself. For example:

  • A CAD drawing may show you a design for a product, but does not help you understand the process of design, nor how best to design the next product;
  • A bid document tells you how a bid was constructed, but does not contain knowledge of why the bid was won or how to improve bid success;
  • A project plan tells you how a project was planned, but contains nothing about project best practices.

Knowledge is not created through work, it is created through reflection on work, and it is captured not in work products, but in knowledge products such as lessons learned, best practices, guidelines and checklists. If those knowledge products are not created by the knowledge workers, because “that would be a minute taken away from doing what they are supposed to be doing” then the machines will have no knowledge to find.

So I don’t think either assumption is valid. I think KM should be part of the job, and part of the expectation for any knowledge worker, and I do not think the machines will find knowledge where no knowledge exists. I think the machines will help greatly, and will enhance the work of the knowledge workers, but not as a replacement for KM activities. Gartner seem to acknowledge this when they say “This is not in order to replace conventional KM techniques but to augment them where automated techniques may be more effective or economically viable”.
Let’s look at where automation helps, let’s embrace that, and let’s not assume this means people can stop doing KM and can leave it all to the machines.

View Original Source (nickmilton.com) Here.