The normal state of our minds is to have intuitive feelings about almost everything, often having answers to questions that we do not completely understand and relying on evidence that we cannot explain. When we are faced with difficult questions that we cannot answer quickly, we substitute that question with an easier one.
This chapter explores how System 2’s laziness leads us to accept System 1’s intuitive answers to complex questions. Yet, as Kahneman goes on to show, this simplification leaves a lot of room for error.
Kahneman presents a set of difficult questions and the easier questions we often substitute for them. “How much would you contribute to save an endangered species?” is replaced with “How much emotion do I feel when I think of dying dolphins?” “How happy are you with your life these days?” is replaced with “What is my mood right now?” “How popular will the president be six months from now?” is replaced with “How popular is the president right now?”
Without System 1’s ability to simplify, our brains would have to retrieve a lot more information to answer something like “How happy are you with your life?” or perform a lot more calculations to predict how popular the president might be in six months.
The mental shotgun makes it easy to generate quick answers to difficult questions. We couple this with System 1’s ability to match intensities across different dimensions. In the example question about the dolphins, we gauge the intensity of our emotions about dolphins and pick a financial contribution that matches that feeling.
This example partly explains the earlier example about how much people would contribute to save birds from drowning. The number of animals is irrelevant, because we match our emotional response to the situation with a corresponding financial contribution.
Kahneman includes another visual illusion: three men walking down a road. Due to the perspective of the image, it appears that the man on the right is much bigger and the man on the left is much smaller, but in reality they are the same size. When asked if the figure on the right is taller than the figure on the left, System 1 actually answers the question “how tall are the three people?” and uses the cues that make the image look three-dimensional to determine that the man on the right is very tall and the man on the left is short.
Again, Kahneman demonstrates the pitfalls of automatic processing with a visual illusion before moving on to a cognitive one that demonstrates the same effect. Here we use the visual cues in the image to determine the height of the men in comparison with their surroundings, rather than directly comparing the two.
The question, “How happy are you with your life these days?” came from a survey of German students. They were asked this question and then asked how many dates they had gone on in the previous month. Their answers to the two questions were uncorrelated. But another group saw the questions in reverse order, and this time the correlation between number of dates and reported happiness was substantial.
When the students are asked about happiness first, they weigh many different factors. But when the question about dates comes first, it primes the students and colors their assessment of their happiness.
What happens with these students is the same as what happens with the visual illusion. They do not want to spend time on a precise evaluation, so they substitute the question with one they have already answered. This is also an example of WYSIATI: the present state of mind looms very large when people evaluate their happiness.
These different answers show just how easy it is to manipulate System 1, as it relies on present evidence and seeks to expend as little energy as possible in making calculations. In this example, it appears to avoid expending energy altogether. The intuitive answer draws on the answer the students have already given.
Particularly when emotions are involved, people often use their preexisting beliefs to come to conclusions, rather than considering new arguments. Psychologist Paul Slovic has proposed an “affect heuristic,” in which people let their likes and dislikes determine their beliefs about the world. If we like the current health policy, we believe its benefits are substantial and its costs more manageable than those of the alternatives. In this way, System 2 becomes “an apologist for the emotions of System 1.” It searches for information and arguments that are consistent with existing beliefs.
The affect heuristic serves as a kind of confirmation bias. We work to integrate new information into the beliefs that we already hold. Even though we are using our System 2 processing, System 2 is greatly affected by the impressions and associations formed by System 1 and works to justify those impressions when presented with new information, which is why Kahneman describes it as an “apologist.”
Kahneman concludes Part 1 by summing up the features and activities attributed to System 1 that he has introduced: generating impressions, operating automatically, creating patterns of ideas, inferring causes, exaggerating consistency (the halo effect), focusing on existing evidence (WYSIATI), matching intensities across scales (e.g., size to loudness), computing more than intended, substituting easy questions for hard ones.
As the first part of the book concludes, Kahneman recaps some of System 1’s most important features. Reviewing this information as a whole, one can see how Kahneman has shown that the shortcuts System 1 takes, and System 2’s lazy acceptance of them, leave our thinking prone to error.