The Gorilla illusions and the illusion of memory

Here is a reprise from the archives – a post primarily about the illusion of memory. The story here from Chabris and Simons raises some disturbing issues about the trustworthiness of tacit knowledge over a long timescale.


[Image: Gorilla 2, originally uploaded by nailbender]

I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous “invisible gorilla” experiment (if you don’t know it, go here). The subtitle of the book is “and other ways our intuitions deceive us”, and the authors talk about a number of human traits – they call them illusions – which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

Three of these illusions have a particular impact on KM, and I would like to address them in a series of blog posts, as it’s a bit much to fit into a single one.

The illusion of memory has a massive impact in KM terms, as it affects the reliability of any tacit knowledge that is held in human memory alone.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person had a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turned out not to have happened to him at all, but to be a story he had heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, both of which turn out to be false when a videotape of the event is finally found. And they give this story of a university experiment into the reliability of memory.

On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.

Two and a half years later, Neisser and Harsch asked the same students to fill out a similar questionnaire about the Challenger explosion.

The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explosion. He recalled the time as 11:30 a.m., the place as his dorm, the activity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an acquaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from different people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.

The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn’t change what we remember.

The implication for Knowledge Management

The implication for Knowledge Management is that if you need to re-use tacit knowledge in the future, you can’t rely on people to remember it accurately. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, and the facts will have been rewritten to be closer to “what feels right”. The forgetting curve will have kicked in, and it kicks in quickly. Tacit knowledge is fine for sharing knowledge on what’s happening now, but for sharing knowledge with people in the future (i.e. transferring knowledge through time as well as space) it needs to be written down quickly, while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it’s right. People may not only refuse to believe the explicit record, they may even edit it to fit their (by now false) memories.

Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

View Original Source (nickmilton.com) Here.


Shared by: Nick Milton
