The curse of knowledge (video)

When we have a lot of knowledge, we underestimate how hard it is to communicate it to people who don’t share it.  This is called the “Curse of Knowledge” – a cognitive bias that leads people to try to convey knowledge in bullet points, or in fuzzy statements which are meaningless to others, or by writing knowledge assets which are incomprehensible to the unknowledgeable reader.

The video below by Jeff Walker, the Sales guru, illustrates this cognitive bias in more detail. As the YouTube caption says –

Ever have an “expert” try to explain something to you, only to be left more confused than when you started? They’d forgotten how to be a beginner… and lost most of the ability to teach along the way… here’s how to not make the same mistake yourself.

The video is aimed at sales staff, for whom the curse of knowledge is just as much a barrier to communication as it is in Knowledge Management, but the message is the same –

You cannot communicate knowledge properly unless you account for the Curse of Knowledge. 

View Original Source (nickmilton.com) Here.

Watch conformity bias in action

I blogged yesterday about groupthink. Would you like to see this in action?

The video below is a startling example of how people will agree with a group even when they know the answer is wrong. This is known as Conformity Bias: our tendency to take cues from the actions or judgments of others rather than exercise our own independent judgment. Conformity bias is a major enemy of Knowledge Management, as it means that group “knowledge” can remain unchallenged.

The video is of the “Asch conformity experiment”, which placed a subject in a group of people primed to give obviously wrong answers. Much of the time the subject agrees with the group even though he knows the answer is wrong, and you can see the consternation on his face as he does this (1 min 31), or the resignation as he “tells a lie” to fit in (2 min 12).

37% of the time the subject gives the wrong answer to fit in with the group, but this falls to 5% if a “partner” in the group gives the right answer, destroying the group’s unanimity. The rate also falls if the subject is allowed to write their answer down rather than say it aloud.

So what’s the lesson for Knowledge Management?

If you are facilitating a group KM session where knowledge is being exchanged, and the participants all seem to agree but you can see consternation or discomfort on some of the faces, then maybe there is groupthink at work. In this more extensive post on conformity bias I suggest some of the things you can do:

  1. Ask people to write down views then read them out, rather than “speaking out around the table”
  2. Press for the dissenting voice – “does anyone think differently here? What about you Susan – you were looking concerned, do you have an alternative view?”
  3. Ask “who wants to be the devil’s advocate here?”
  4. Make dissent safe, both in groups and online
  5. Avoid using social responses as a form of crowdsourcing, unless you do this very carefully, as everyone is likely to agree with the first confident view. 

For a more modern version of this video, without the 1970s fashion sense, see here

View Original Source (nickmilton.com) Here.

Does KM need an official Devil’s Advocate role?

KM is beset by cognitive biases such as Groupthink. Maybe the Devil’s Advocate role is needed to help combat this?

The biggest impediments to learning in an organisation are mental impediments, driven by cognitive biases.  These include confirmation bias (where we only accept evidence that confirms what we already think) and groupthink, aka conformity bias (where the desire for harmony or conformity leads group members to minimize conflict and reach a consensus decision without critical evaluation of alternative viewpoints).

If you combine these two, you end up with a powerful, immovable force, whereby a group becomes entrenched in its thinking and a Knowledge Bubble forms.  People inside the knowledge bubble are convinced they are correct, and immune to learning or to new knowledge that contradicts what they think. They cannot learn. They are stuck.  The classic example is the Bush Administration which, convinced that Saddam Hussein was the primary threat, refused to countenance warnings about Osama bin Laden.

But if groupthink is such a potent threat to learning, and thus to KM, whose job is it to prick the Knowledge Bubbles?

This interesting post from TechCrunch called “The VP of Devil’s Advocacy” might just have the answer.

One solution (and here the TechCrunch post quotes from the movie World War Z) is:

The tenth man. If nine of us look at the same information and arrive at the exact same conclusion, it’s the duty of the tenth man to disagree. No matter how improbable it may seem, the tenth man has to start thinking with the assumption that the other nine are wrong.

The original scene is below.

This is an illustration from Hollywood, but it is based on a real group – the Devil’s Advocates Office in Israeli intelligence – described here as follows:

The devil’s advocates office ensures intelligence assessments are creative and do not fall prey to groupthink. The office regularly criticises products coming from the analysis and production divisions, and writes opinion papers that counter these departments’ assessments. The staff in the devil’s advocate office is made up of extremely experienced and talented officers who are known to have a creative “out of the box” way of thinking.

The Devil’s Advocates Office is an excellent and systematic defence against the perils of groupthink.

An alternative approach, taken by many project management organisations, is what they call “The Black Hat review” – a destructive review questioning the assumptions underlying a proposal or a planned project. Often the Project Management Office takes this Black Hat role, which can counter the wishful thinking that besets many projects.

In sports, Bill Simmons calls this role “The VP of common sense”

I’m becoming more and more convinced that every professional sports team needs to hire a Vice President of Common Sense, someone who cracks the inner circle of the decision-making process along with the GM, assistant GM, head scout, head coach, owner and whomever else. One catch: the VP of CS doesn’t attend meetings, scout prospects, watch any film or listen to any inside information or opinions; he lives the life of a common fan. They just bring him in when they’re ready to make a big decision, lay everything out and wait for his unbiased reaction.

When you think about some of the crazy decisions taken by companies, and the even crazier ones taken by governments, it makes you think that this sort of systematic challenge should be institutionalised more often.

Perhaps more organisations should have a VP of Devil’s Advocacy, a Chief Black Hat, or a VP of Common Sense, to act as “The Tenth Man” –

Someone whose role and accountability is to be the Chief Pricker of the Knowledge Bubbles.

View Original Source (nickmilton.com) Here.

The human bias behind group-think

There is a real human bias that pushes us to agree with each other, and it can drive groupthink and false consensus.

Image: The power of social proof climbs rapidly with the number of people involved (from the Solomon Asch study).

Why are “canned laughter” tracks so common on TV comedies?  We all hate them, we know they are false, and yet they keep putting them on the soundtrack.  The reason is that canned laughter is a form of Social Proof, and social proof is a massive factor in the way we think and behave.

Social Proof is the name for the assumption that “if everyone else thinks so, it must be correct”. Canned laughter is a subtle form of social proof, and it works – people judge comedy shows as funnier if there is canned laughter. Even though we know it’s false, we instinctively think “they are all laughing, so it’s funny”. The TV executives know we think this way, which is why canned laughter is so endemic.

The Solomon Asch study shows an even more radical form of social proof – how up to 74% of people (as part of a secret experiment) would say something they know is wrong, just to agree with everyone else. Asch concluded that it is difficult to maintain that you know something when everyone else knows the opposite. The group pressure (“Social Proof”) implied by the expressed opinion of other people can lead to modification and distortion, effectively making you agree with almost anything.

The risk in Knowledge Management is very clear.

Consensus in a group may mean that everyone agrees because they all independently think the answer is correct, or it may mean that they all agree because everyone else agrees. This is particularly the case when the first person to speak is very confident; everyone else is likely to follow, and so social proof builds up. (I talk about this in my blog post on the illusion of confidence, and point out that confidence is often a function of ignorance, especially ignorance in small groups. Real experts are rarely dogmatic.)

I saw this for myself in a meeting in Sweden (a country where consensus is particularly valued). I asked everyone to judge the success of a project from their own perspective, to mark the level of success out of 10, and to write that number down in front of them. I was looking for outliers, and I could see that the person next to me had written a 6. We went round the table, the first person said “8 out of 10”, and the marks followed – 8,8,8,8. We got to the person who had written down 6, and she said “8” as well.

Social proof is such a well-known phenomenon that it is widely used by marketers to convince us to buy things, and it can be a powerful tool when marketing KM in an organisation. However, when we are identifying knowledge, discussing knowledge, or trying to determine from a group what actually happened and why, social proof can drive groupthink and distort the truth.

In knowledge management we are not interested in consensus, nor in knowledge as something to sell to others; we are interested in truth, or as close to the truth as we can get. Social proof is not real proof, and just because everyone agrees with a statement does not mean they all believe it to be correct.

So how do we avoid conformity and groupthink driven by social proof in KM?

1) When looking for individual objective input, we must avoid “speaking out around the table.” In Sweden I could have collected votes on post-it notes, or I could have said clearly “read out what you have written, even if it’s not what everyone else said”.

2) As facilitators of KM processes, we must always ask for the dissenting voice. “Does anyone disagree with this interpretation? Might there be other views here? What are the alternatives? Susan, you are looking concerned, do you have another view?”

3) As online facilitators, we must make dissent safe. I recall one community of practice where, in the first year, social proof was very strong. If anyone disagreed with the first post in a conversation they would not disagree online, but would reply privately. It took a lot of work from the facilitator to reverse this trend, and to develop a community where dissent was welcomed as part of the search for the truth.

4) We must be careful to avoid using social responses as a form of crowdsourcing. Crowdsourcing works either with an expert crowd willing to share dissenting voices, or with a knowledgeable crowd able to contribute independently. It doesn’t work with a small uncertain crowd building on each other’s opinions, as that way you can end up with false agreement through social proof.

Social proof is real, groupthink is powerful, and it is one of the many human biases we need to beware of in KM. 

View Original Source (nickmilton.com) Here.

The Gorilla illusions and the illusion of memory

Here is a reprise from the archives – a post primarily about the illusion of memory. The story here from Chabris and Simons raises some disturbing issues about the trustworthiness of tacit knowledge over a long timescale.


Image: Gorilla 2, originally uploaded by nailbender.

I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous “invisible gorilla” experiment (if you don’t know it, go here). The subtitle of the book is “ways our intuition deceives us”, and the authors talk about a number of human traits – they call them illusions –  which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

The illusions which have most impact on KM are:

  • the illusion of attention
  • the illusion of memory
  • the illusion of confidence

I would like to address these three illusions in a series of blog posts, as it’s a bit much to fit into a single one.

The illusion of memory has a massive impact in KM terms, as it affects the reliability of any tacit knowledge that is held in human memory alone.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person had a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turned out not to have happened to him at all, but to be a story he had heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, both of which turn out to be false when a videotape of the event is finally found. And they give this story of a university experiment into the reliability of memory.

 On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.

Two and a half years later, Neisser and Harsch asked the same students to fill out a similar questionnaire about the Challenger explosion. 

The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explosion. He recalled the time as 11:30 a.m., the place as his dorm, the activity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an acquaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from different people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.

The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn’t change what we remember.

The implication for Knowledge Management

The implication for Knowledge Management is that if you need to re-use tacit knowledge in the future, you can’t rely on people to remember it accurately. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, and the facts will have been rewritten to be closer to “what feels right”. The forgetting curve will have kicked in, and it kicks in quickly.  Tacit knowledge is fine for sharing knowledge about what’s happening now, but for sharing knowledge with people in the future (i.e. transferring knowledge through time as well as space) it needs to be written down quickly, while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right. People may not only refuse to believe the explicit record, they may even edit it to fit their (by now false) memories.

Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

View Original Source (nickmilton.com) Here.

The curse of knowledge and the danger of fuzzy statements

Fuzzy statements in lessons learned are very common, and are a result of “the curse of knowledge”.

Image: Fuzzy Monster – clip art courtesy of DailyClipArt.net.

I blogged yesterday about Statements of the Blindingly Obvious, and how you often find these in explicit knowledge bases and lessons learned systems, as a by-product of the “curse of knowledge”.

There is a second way in which this curse strikes, and that is what I call “fuzzy statements”.

It’s another example of how somebody writes something down as a way of passing on what they have learned, and writes it in such a way that it is obvious to them what it means, but carries very little information to the reader.

A fuzzy statement is one built around an unqualified adjective, for example:

  • Set up a small, well qualified team…(How small? 2 people? 20 people? How well qualified? University professors? Company experts? Graduates?)
  • Start the study early….(How early? Day 1 of the project? Day 10? After the scope has been defined?)
  • A tighter approach to quality is needed…. (Tighter than what? How tight should it be?)
You can see that in each case the writer has something to say about team size, schedule or quality, but hasn’t really said enough for the reader to understand what to do, other than in a generic “fuzzy” way, using adjectives like “small”, “well qualified”, “early” and “tighter”, which need to be quantified.

In each case, the facilitator of the session or the validator of the knowledge base needs to ask additional questions. How small? How well qualified? How early? How tight?

Imagine if I tried to teach you how to bake a particular cake, and told you “Select the right ingredients, put them in a large enough bowl. Make sure the oven is hotter”. You would need to ask more questions in order to be able to understand this recipe.

Again, it comes back to Quality Control.

Any lessons management system or knowledge base suffers from Garbage In, Garbage Out, and the unfortunate effect of the Curse of Knowledge is that people’s first attempt to communicate knowledge is often, as far as the reader is concerned, useless garbage.

Apply quality control to your lessons, and de-fuzz the statements.

View Original Source (nickmilton.com) Here.

How to curb overconfidence by considering the unknowns

Overconfidence is one of the most powerful cognitive biases affecting KM. Here is how to address it.

Cognitive biases are the plague of Knowledge Management. They cause people to neglect evidence, to fail to notice things, to reinvent their memory, and to be overconfident about their own knowledge.

Overconfidence in particular is an enemy of learning. People are more willing to accept knowledge from a confident person, but confidence is more often a sign of a lack of knowledge than of expertise – the “Dunning-Kruger effect”. Overconfidence leads to wishful thinking, which leads to ignoring knowledge from others, and is one of the primary causes of project cost and time overruns.

Overconfidence is therefore what happens when you don’t know what you don’t know, and a recent Insead study shows that overconfidence can be significantly reduced just by considering your lack of knowledge. In this study the researchers gave people general knowledge questions, and found (as is often the case) that people were overconfident about their answers. (You can take a similar test to measure your own level of overconfidence.) They then repeated the exercise with two groups: the first group was asked to list a couple of missing pieces of knowledge which would have helped them guess the answer better, and the second group was asked to consider reasons why their choice might be wrong (a “devil’s advocate” approach).

The paper contains a very clear graph which shows that the approach of “considering the unknowns” has a major impact on overconfidence, while the devil’s advocate approach is far less powerful. The report concludes:

In our view, overconfidence often arises when people neglect to consider the information they lack. Our suggestion for managers is simple. When judging the likelihood of an event, take a pen and paper and ask yourself: “What is it that I don’t know?” Even if you don’t write out a list, the mere act of mulling the unknowns can be useful. And too few people do it. Often, they are afraid to appear ignorant and to be penalised for it. But any organisation that allows managerial overconfidence to run amok can expect to pay a hefty price, sooner or later.

In Knowledge Management, we have a simple and powerful process that allows exactly this sort of “considering the unknowns”: the Knowledge Gap Analysis, or its more elaborate version for larger projects, the Knowledge Management Plan. Both of these processes require a team to list the things they do not know (thus reducing overconfidence) and then set up learning actions to acquire the knowledge (thus reducing the number of unknowns).

These are two of many KM techniques that can help address cognitive bias.

View Original Source (nickmilton.com) Here.

Why winners don’t learn (the winner’s curse)

Teams and individuals who are winning are often the poorest at learning – a particular form of “winner’s curse”.

Who learned more about Tank Warfare from World War One? Was it the victorious Americans, British and French, or the losing Germans?

It was, of course, the Germans.

The story below is taken from a review of a book by Max Boot.

“The British military and government, before Churchill became Prime Minister, lost interest in tanks. In France, Captain Charles de Gaulle was interested in fast-moving mechanized warfare, but the French military favored defensive warfare and firepower.  The United States also devoted little interest in armored warfare. Writes Boot:

“The U.S. had deployed a Tank Corps in World War I, but it was disbanded in 1920 over the anguished objections of two of its leading officers — Colonel George S. Patton and Major Dwight D. Eisenhower.

“It was the Germans who were most interested in fast-moving mechanized warfare. Writes Boot:

“Around 1934, Colonel Heinz Guderian, chief of staff of the Inspectorate of Motorized Troops, gave the Fuehrer [Adolf Hitler] a short tour d’horizon of tank warfare. “Hitler,” Guderian wrote, “was much impressed by the speed and precision of movement of our units, and said repeatedly, ‘That’s what I need! That’s what I want!’”

“In 1939 Hitler had a three-hour parade of mechanized forces. [The British tank pioneer J.F.C.] Fuller was there, invited because of his fascist sympathies. Hitler said to him, “I hope you were pleased with your children.” Fuller replied:

“Your Excellency, they have grown up so quickly that I no longer recognize them.”

The winner’s curse is that the winner often fails to learn, and so is overtaken in the next competition by the loser. That’s why Germany had overtaken the Allied powers in tank warfare by 1939, and the loser became the winner for a while.  Winners are complacent, and reluctant to change. Losers are eager not to lose again.

We often see this “Winner’s Curse” in our Bird Island KM exercises, where the team that builds the tallest initial tower seems to learn the least from the others (and often from the Knowledge Asset as well).  Very often they are not the winning team at the end of the exercise.

The very fact that a team is ahead in the race means that they have less incentive to learn, so the team with the tallest tower “relaxes” a bit. The best learners are often the teams with the second-tallest tower, as they know that with a little learning effort they can take the lead. There also seems to be a tendency to learn more readily from failure than from success.

The story of the Wright Brothers is another example – having developed the first effective aeroplane, they failed to learn and optimise their design, and were eventually outcompeted. Their design became obsolete and the Wright Brothers went out of business.

Beware of the Winner’s Curse in your KM programs. Ensure the winning teams also continue to learn. Capture lessons from successes and failures, and encourage even the winners to keep pushing to do even better.  Learning from failure is psychologically easier, but learning from success allows success to be repeated and improved.

Learning from success is very difficult, but it is the most powerful learning you can do.

View Original Source Here.

Tacit Knowledge and cognitive bias

Is that really Tacit Knowledge in your head, or is it just the Stories you like to tell yourself?

IMAGINATION by archanN on wikimedia commons

All Knowledge Managers know about the difference between tacit knowledge and explicit knowledge, and the difference between the undocumented knowledge you hold in your head, and documented knowledge which can be shared.  We often assume that the “head knowledge” (whether tacit or explicit) is the Holy Grail of KM; richer, more nuanced, more contextual and more actionable than the documented knowledge.

However the more I read about (and experience) cognitive bias and the failures of memory, the more suspicious I become of what we hold in our heads.

These biases and failures are tendencies to think in certain ways that can lead to systematic deviations from good judgement, and to remember (and forget) selectively and not always in accordance with reality. We all create, to a greater or lesser extent, our own internal “subjective social reality” from our selective and flawed perception and memory.

Cognitive and memory biases include

  • Confirmation bias, which leads us to take on new “knowledge” only when it confirms what we already think
  • Recency bias, which leads us to think that the most recent events are the most important 
  • Post-investment rationalisation, which leads us to think that any costly decisions we made in the past must have been correct
  • Sunk-cost fallacy, which makes us more willing to pour money into failed big projects than into failed small projects
  • Observational selection bias, which leads us to think that things we notice are more common than they are (like when you buy a yellow car, and suddenly notice how common yellow cars are)
  • Attention bias, where there are some things we just don’t notice (see the Gorilla Illusions)
  • Memory transience, which is the way we forget details very quickly, and then “fill them in” based on what we think should have happened
  • Misattribution, where we remember things that are wrong
  • Suggestibility, which is where we create false memories
So some of those things in your head that you “Know” may not be knowledge at all. Some may be opinions which you have reinforced selectively, or memories you have re-adjusted to fit what you would have liked to happen, or suggestions from elsewhere that feel like memories. Some of them may be more like a story you tell yourself, and less like knowledge.

Do these biases really affect tacit knowledge? 

Yes, they really do, and they can affect the decisions we make on the basis of that knowledge.  Chapter 10 of the 2015 World Development Report, for example, looks at cognitive biases among development professionals, and makes for interesting reading.

While you would expect experts in the World Bank to hold a reliable store of tacit knowledge about investment to alleviate poverty, in fact these experts are as prone to cognitive bias as the rest of us. Particularly telling, for me, was the graph that compared what the experts predicted poor people would think, against the actual views of the poor themselves. 

The report identifies and examines four “decision traps” that affect development professionals and influence the judgements that they make:

  • the use of shortcuts (heuristics) in the face of complexity; 
  • confirmation bias and motivated reasoning; 
  • sunk cost bias; and 
  • the effects of context and the social environment on group decision making.
And if the professionals of the World Bank are subject to such traps and biases, then there is no guarantee that the rest of us are any different.

So what is the implication?

The implication of this study, and many others, is that one person’s “tacit knowledge” may be unreliable, or at best a mish-mash of knowledge, opinion, bias and falsehood. As Knowledge Managers, there are a number of things we can do to counter this risk.

  1. We can test Individual Knowledge against the knowledge of the Community of Practice. The World Bank chapter suggests that “group deliberation among people who disagree but who have a common interest in the truth can harness confirmation bias to create “an efficient division of cognitive labor”. In these settings, people are motivated to produce the best argument for their own positions, as well as to critically evaluate the views of others. There is substantial laboratory evidence that groups make more consistent and rational decisions than individuals and are less “likely to be influenced by biases, cognitive limitations, and social considerations”. When asked to solve complex reasoning tasks, groups succeed 80 percent of the time, compared to 10 percent when individuals are asked to solve those tasks on their own. By contrast, efforts to debias people on an individual basis run up against several obstacles (and) when individuals are asked to read studies whose conclusions go against their own views, they find so many flaws and counterarguments that their initial attitudes are sometimes strengthened, not weakened”. Therefore community processes such as Knowledge Exchange and Peer Assist can be ideal ways to counter individual biases.
  2. We can routinely test community knowledge against reality. Routine application of reflection processes such as After Action Review and Retrospect requires an organisation to continually ask the questions “What was expected to happen?” vs “What actually happened?”.  With good enough facilitation, and then careful management of the lessons, reality can be a constant self-correction mechanism against group and individual bias.
  3. We can bring in other viewpoints. Peer Assist, for example, can be an excellent corrective to group-think in project teams, bringing in others with potentially very different views. 
  4. We can combine individual memory to create team memory. Team reflection, such as a Retrospect, is more powerful than individual reflection, as the team notices and remembers more things than any individual can.
  5. We can codify knowledge. Poor as codified knowledge is, it acts as an aide memoire, and counteracts the effects of transience, misattribution and suggestibility. 
But maybe the primary thing we can do is to stop seeing individual tacit knowledge as safe and reliable, and instead start to concentrate on the shared knowledge held within communities of practice.

Think of knowledge as Collective rather than Individual, and you will be on the right track.

View Original Source Here.