In his book Blink: The Power of Thinking Without Thinking, Malcolm Gladwell writes about “thin-slicing”: the ability to make very quick “snap” decisions without a lot of detailed information. He argues that this ability to make intuitive judgments is developed over time with experience and training. But an article called Banish Your Biggest Problem-Solving Bias by Micah May, in The Journal of Problem Solving put out by McKinsey, raises an interesting point.
One of the problems with this “gut level” approach to decision making is that we are all hindered by the limits of what we know. Because we each see the world in a particular way, we often succumb to confirmation bias: interpreting what we see in a way that confirms our existing beliefs.
Daniel Kahneman, who won the 2002 Nobel Prize in economics, confirmed this (see his very stimulating 2002 Nobel lecture here). He found that our intuition about an object is directly related to how clearly we see it. His example: we judge that we are close to a tree when we can see the individual leaves, and farther away when we can only make out lumps and branches. This means that the moment our vision is impaired, our brain’s ability to make estimates is compromised, because we start filling in the gaps.
No wonder the risk of accidents during nighttime driving (9pm – 6am) is 4.6 times higher than the daytime rate. It’s not just that it’s dark; it’s that we are making decisions based on what we think we know (e.g., “I have enough room to pass” or “I have enough time to slow down”) rather than on what we actually see. Our minds, in a sense, are on autopilot.
The challenge for organizations is to overcome this bias. Part of the answer is a disciplined use of logic. Another is simply to make sure that a diverse enough group of people is involved in the assumption-building phase of a project, so that each point is challenged from multiple perspectives.
Micah also references some interesting work by Richards Heuer, who spent 45 years at the Central Intelligence Agency (CIA) and is best known for his work on Analysis of Competing Hypotheses, now available as a free online tool. Heuer’s take is that, “In various post-mortem analyses, the tendency to see what you expect to see was the basic cause of most intelligence failures. Confirmation bias is the current term for that phenomenon.”
And here is what Micah shares about how the CIA deals with it.
Key assumptions check:
Simply write down all of the assumptions you can think of that underlie your hypothesis. Then ask: How do you know each one to be true? Can you think of a plausible scenario under which any of these assumptions is not true? What would happen to the overall hypothesis if an assumption were overturned?
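The three questions above can be sketched as a simple structured record. This is a minimal illustration only; the assumption text, field names, and answers below are hypothetical examples, not from the article or the CIA’s actual materials.

```python
# A minimal sketch of a key assumptions check: one record per assumption,
# carrying the three diagnostic questions from the technique.

def key_assumptions_check(assumption):
    """Return the three diagnostic questions applied to one assumption."""
    return [
        f"Assumption: {assumption['claim']}",
        f"How do we know it is true? {assumption['basis']}",
        f"Plausible scenario where it fails: {assumption['failure_scenario']}",
        f"Impact on the hypothesis if overturned: {assumption['impact']}",
    ]

# Hypothetical example assumption (illustrative only).
example = {
    "claim": "Customers will accept a 5% price increase",
    "basis": "Last year's pricing survey",
    "failure_scenario": "A competitor holds prices flat",
    "impact": "The revenue forecast no longer holds",
}

report = key_assumptions_check(example)
for line in report:
    print(line)
```

Writing each assumption down in this form forces the team to answer all three questions explicitly rather than leaving any of them implicit.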
Analysis of competing hypotheses (ACH):
Generate various hypotheses and put your best ones in columns. Then, in rows, write down the evidence you have. In each cell, indicate whether the evidence is consistent with the hypothesis, inconsistent with it, or neutral. What makes ACH an important and powerful technique is that it surfaces evidence that is inconsistent with a hypothesis, making it possible to eliminate one or more of the options.
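The matrix just described can be sketched in a few lines of code. This is a minimal illustration, not Heuer’s actual tool: the hypotheses, evidence items, and the simple scoring rule (count the inconsistencies per hypothesis; in ACH, the hypothesis with the least inconsistent evidence is the strongest survivor) are hypothetical examples.

```python
# A minimal ACH sketch: hypotheses are columns, evidence items are rows,
# and each cell holds "C" (consistent), "I" (inconsistent), or "N" (neutral).

def ach_scores(hypotheses, evidence, matrix):
    """Count inconsistent cells per hypothesis; fewer is better."""
    return {
        h: sum(1 for e in evidence if matrix[(e, h)] == "I")
        for h in hypotheses
    }

# Hypothetical hypotheses and evidence (illustrative only).
hypotheses = ["H1: supplier delay", "H2: design flaw"]
evidence = ["E1: shipments on time", "E2: failures cluster in one batch"]
matrix = {
    ("E1: shipments on time", "H1: supplier delay"): "I",
    ("E1: shipments on time", "H2: design flaw"): "N",
    ("E2: failures cluster in one batch", "H1: supplier delay"): "N",
    ("E2: failures cluster in one batch", "H2: design flaw"): "C",
}

scores = ach_scores(hypotheses, evidence, matrix)
# The hypothesis with the most inconsistent evidence is the candidate
# for elimination; here H1 accumulates one inconsistency, H2 none.
```

The point of the matrix is the disconfirming column scan: consistent evidence rarely discriminates between hypotheses, but a single solid inconsistency can knock one out.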
Pre-mortem analysis:
Before delivering a recommendation, the team should assume that it is embarrassingly wrong and try to work out why. Ideally, someone who did not conduct the analysis should assist in the pre-mortem. With fresh eyes, this person should ask, “Could any of our key assumptions have been untrue? Is there room to doubt any of the information used to generate the recommendation? Where are the information gaps?” Looking for these answers will help the team identify important weaknesses in the recommendation.
As organizations become more complex, there is a stronger reliance on what is already known, simply because of the added time pressure. By using tools like ACH, organizations can work toward ensuring that confirmation bias is addressed before decisions become costly to reverse.