System 1 allows us to use intuition to draw conclusions. Kahneman introduces two shapes that can either look like the letter “B” or the number “13” depending on the surrounding context. He also describes a sentence, “Ann approached the bank,” which can change associations based on whether an earlier sentence has to do with money or with rivers. In each case, a definite choice is made in our minds, and we are often unaware of the ambiguity of the shape or the sentence.
Kahneman’s B/13 and Ann examples demonstrate not only how we automatically (and unknowingly) process information, but also how we immediately construct stories and patterns from previously encountered contexts.
Psychologist Daniel Gilbert argues that to understand a statement, one must first attempt to believe it, and only then consider whether it is untrue. Even a nonsensical statement, like “whitefish eat candy,” will initially evoke belief until it is proven false. If people are asked to hold a string of digits in memory while simultaneously judging whether statements are true, they will end up believing many false statements. The conclusion: when System 2 is otherwise engaged, we believe almost anything.
Like the earlier example of the woman in the gorilla suit, when our minds are preoccupied with other tasks it can be difficult to process and evaluate information. And because, as Gilbert demonstrates, we tend to err on the side of belief rather than disbelief, we can make mistakes about even the most basic information.
This concept contributes to a general confirmation bias. Asking “Is Sam friendly?” will lead to a different thought process than “Is Sam unfriendly?” We automatically look for confirming evidence. This is contrary to the rules of science, which advise testing hypotheses by trying to refute them.
The “halo effect” is an outgrowth of confirmation bias—it is the tendency to like (or dislike) everything about a person, including things that we might not have observed. The halo effect is one of the ways that System 1 generates a simpler representation of the world than actually exists.
The halo effect serves as a confirmation bias because new things that we learn about a person are affected by the beliefs we already hold about that person; we simplify their traits instead of seeing them as complex.
Kahneman describes a scenario in which one might meet a woman named Joan at a party. Joan is personable and easy to talk to. If her name comes up as a possible donor for a charity, we retrieve good feelings about her and think that she is likely to be generous (a relatively baseless assumption). And now that we believe she is likely to be generous, we like her even more.
In the Joan example, the halo effect is a particularly insidious kind of fallibility, because our errors compound. When we like a person, we tend to attribute more positive traits to them (even if those attributions are not based on concrete evidence), which in turn adds to our good feelings toward that person.
Another psychologist, Solomon Asch, presented descriptions of two people and asked for comments about their personality. The descriptions included the exact same words, but for Alan, the description began “intelligent—industrious—impulsive,” and ended “critical—stubborn—envious.” For Ben, the words were listed in the opposite order. People form a much more positive view of Alan because his list begins with positive traits, while Ben’s begins with negative ones; we interpret the later words in the context of the first.
Asch’s experiment demonstrates just how strongly our first impressions of a person (guided by System 1) can affect our view of that person going forward. Even given a string of six words about two fictional people, we tend to weight the earlier characteristics we are given more than the later characteristics.
Kahneman describes the halo effect he himself experienced when grading students’ exams. He would often be biased by their first essay score. When he started to score the tests blind, working one essay at a time before moving on to the next, he found that his new grading system made it more difficult to give a coherent score (because the essay grades varied wildly), but was less biased. He no longer experienced cognitive ease.
Kahneman gives a personal example in order to demonstrate how the halo effect can change not only our views of people and their personalities, but also how we evaluate their work—something we might hope to be more objective about.
A procedure for taming the halo effect is to aggregate the opinions of many people, following the principle Kahneman calls “decorrelate error.” If people are asked to guess the number of pennies in a jar, their individual estimates will be relatively poor, but the average of a group of estimates tends to be quite accurate. The only caveat is that the estimates must be independent; they cannot be allowed to influence one another. Organizations can learn from this: open discussions often give too much weight to the opinions of those who speak early and assertively.
Kahneman then gives a real-world example of how to overcome this confirmation bias, and also the overconfidence of a few individuals. In group discussions, confidence is often viewed as reassuring, but in fact the separate opinions of many people tend to create more successful calculations and plans.
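The penny-jar claim can be illustrated with a short simulation. The jar size, number of guessers, and error range below are hypothetical values chosen only for illustration; the point is that averaging many noisy but independent estimates cancels out much of the individual error.

```python
import random
import statistics

random.seed(42)

TRUE_COUNT = 850   # hypothetical number of pennies in the jar
N_GUESSERS = 100

# Each independent guess is noisy: unbiased on average, but individually
# off by a large random error (here, up to +/- 40% of the true count).
guesses = [TRUE_COUNT * random.uniform(0.6, 1.4) for _ in range(N_GUESSERS)]

avg = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_COUNT) for g in guesses)
group_error = abs(avg - TRUE_COUNT)

print(f"average guess: {avg:.0f} (true count: {TRUE_COUNT})")
print(f"typical individual error: {typical_individual_error:.0f}")
print(f"error of the averaged guess: {group_error:.0f}")
```

The sketch also shows why independence matters: if the guessers influenced one another, their errors would be correlated and would no longer cancel when averaged.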
Kahneman next introduces a principle, which he terms “What You See Is All There Is” (WYSIATI). If we are asked whether a person will be a good leader and are told first that they are intelligent and strong, we automatically assume that they will be, even though the description might go on to say that the person is also corrupt and cruel. We jump to conclusions based on the information available to us.
WYSIATI is a major element of both the overconfidence and laziness of the brain. Our System 1 automatically takes the information available and uses it for the basis of our assumptions, often without consulting System 2 to see if there is outside information that might be useful to the question as well.
In an experiment constructed by Kahneman’s long-time scientific partner Amos Tversky, people were presented with a legal scenario. Some people heard the defense, others heard the prosecution, and some heard both sides. The participants were aware of the setup and could have generated the argument for the other side. Nevertheless, the presentation of one-sided evidence strongly affected judgments, and people who only saw one side were far more confident in their judgments than people who saw both.
Even though being presented with both sides of the case leads to a fairer and more informed process, people are less confident in their judgments when they hear both sides. This is dangerous: we may prefer simple, one-sided information, but when it comes to court determinations, it is important to weigh all of the evidence.
WYSIATI implies that neither the quality nor the quantity of the evidence counts for much. The confidence that people have in their beliefs is based on the quality of the story they can tell about what they see. We often fail to allow for “the possibility that evidence that should be critical to our judgment is missing.”
Here, Kahneman also relates WYSIATI and overconfidence to our tendency to create stories. The more that we can make a coherent narrative out of evidence presented to us, the more confident we are in our conclusions.
WYSIATI also accounts for framing effects. The statement that “the odds of survival one month after surgery are 90%” is more reassuring than “mortality within one month of surgery is 10%,” and people’s decisions about whether to go through with surgery can be affected by the framing that they see.
In a situation like this, there is no “correct” answer as to whether someone should undergo surgery. But the book shows that it is important to be able to make a decision without being biased by how the information is presented (although, as Kahneman explains in later chapters, in a situation like this people are likely to be risk averse).
Lastly, WYSIATI accounts for what Kahneman calls “base-rate neglect.” Kahneman briefly describes a fictional man named Steve in the introduction and reintroduces him here. Steve is “a meek and tidy soul” who has a “need for order and structure.” If people are asked whether it is more likely for Steve to be a librarian or a farmer, they will say a librarian, even though there are about twenty male farmers for every male librarian. The statistical facts do not come to mind, only the description. What we see is all there is.
Base-rate neglect not only reveals our overconfidence in the information with which we have been presented, but also illuminates our inherent preference for stories over statistics (like the Linda example later). The description fits our stereotype of a librarian, so we assume he is a librarian despite the statistical improbability.
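The arithmetic behind base-rate neglect can be made explicit with Bayes’ rule. The 20-to-1 farmer-to-librarian ratio comes from the book; the likelihoods below (how well the “meek and tidy” description fits each occupation) are hypothetical. Even granting that the description fits a librarian four times better, the base rates still make Steve far more likely to be a farmer.

```python
# Base rates: roughly 20 male farmers for every male librarian (from the book).
prior_librarian = 1 / 21
prior_farmer = 20 / 21

# Hypothetical likelihoods: suppose the description is four times as
# likely to fit a librarian as a farmer.
p_desc_given_librarian = 0.40
p_desc_given_farmer = 0.10

# Bayes' rule: P(librarian | description)
posterior_librarian = (prior_librarian * p_desc_given_librarian) / (
    prior_librarian * p_desc_given_librarian
    + prior_farmer * p_desc_given_farmer
)

print(f"P(librarian | description) = {posterior_librarian:.2f}")
```

Under these assumed numbers the posterior probability that Steve is a librarian is only about one in six; System 1 ignores the priors entirely and judges by the description alone.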