In the face of an uncertain prospect, people assign weights to the possible outcomes, reflecting how likely they believe each outcome is to occur. These weights, called decision weights, are correlated with the probabilities of the outcomes, and people usually assign them automatically and unconsciously.
Kahneman moves on to another important aspect of prospect theory: how we view different gambles and anticipate different outcomes. This happens without our knowledge and can lead to severe biases on which Kahneman elaborates.
In Bernoulli’s theory, gambles were assessed by their expected utility: the average of the outcomes’ values, weighted by the probability of each outcome, so that probabilities enter the calculation at face value. But this theory does not reflect reality. Kahneman gives four examples of probability changes: 1) from 0% to 5%, 2) from 5% to 10%, 3) from 60% to 65%, 4) from 95% to 100%. In each case, one’s chance of receiving $1 million improves by the same five percentage points, yet nearly everyone agrees that changes 1 and 4 are psychologically more affecting.
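The arithmetic behind this point is easy to check (a quick sketch, not from the book): all four of these probability changes add exactly the same amount of expected value.

```python
# Each 5-point improvement in the chance of winning $1 million adds the
# same expected value, even though the changes feel very different.
prize = 1_000_000
changes = [(0.00, 0.05), (0.05, 0.10), (0.60, 0.65), (0.95, 1.00)]
gains = [round((hi - lo) * prize) for lo, hi in changes]
print(gains)  # every change is worth $50,000 in expectation
```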
As Kahneman demonstrated throughout the prior chapter, the value of a change in wealth matters more than the absolute amount of money. Here, similarly, the change in the probability of a given outcome matters more than the resulting number itself. Subjectively, the change from 0% to 5% feels more important than the change from 5% to 10%, even though both add the same five points and the latter ends at a higher probability.
The impact of the change from 0% to 5% illustrates the possibility effect, which causes unlikely outcomes to be weighted disproportionately heavily. The improvement from 95% to 100% is similarly impactful and is called the certainty effect: outcomes that are almost certain are given less weight than their probability justifies, because people disproportionately fear the 5% chance that things will not work in their favor. This makes people risk averse.
The possibility and certainty effects also demonstrate loss aversion: even when an outcome is 95% guaranteed, the slim chance of losing it leads people to take extremely cautious actions in order to protect themselves.
In terms of bad outcomes, the psychological difference between a 95% risk of disaster and a 100% certainty of disaster appears to be even greater: the sliver of hope looms very large for people. In sum: the decision weights that people assign to outcomes are not identical to the probabilities of those outcomes.
The mirror image appears when people must weigh options primarily in terms of losses: they take extreme risks in order to hold on to the hope that they might avoid a loss entirely.
Kahneman asks which option the reader would prefer in two problems. Problem A: a 61% chance to win $520,000 or a 63% chance to win $500,000. Problem B: a 98% chance to win $520,000 or a 100% chance to win $500,000. Most people prefer the first option in problem A and the second in problem B, but this is logically inconsistent: one should consistently favor either the improvement in odds or the improvement in potential winnings. This puzzle, introduced by Maurice Allais in 1952, came to be known as the Allais paradox, and it is explained by the certainty effect.
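The inconsistency can be verified with expected values (a quick sketch using the figures from the text): the $520,000 gamble has the higher expectation in both problems, so a consistent expected-value maximizer would choose it twice.

```python
# Expected value of each gamble in the two Allais problems.
gambles = {
    "A: 61% of $520k": (0.61, 520_000),
    "A: 63% of $500k": (0.63, 500_000),
    "B: 98% of $520k": (0.98, 520_000),
    "B: 100% of $500k": (1.00, 500_000),
}
evs = {name: round(p * prize) for name, (p, prize) in gambles.items()}
for name, ev in evs.items():
    print(f"{name}: expected value ${ev:,}")
```

In problem A the left option wins ($317,200 vs. $315,000), and in problem B it also wins ($509,600 vs. $500,000), yet most people switch.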
The Allais paradox demonstrates that even numbers are experienced subjectively. Rather than behaving rationally, people make personal (and somewhat illogical) choices based on a sense of the numbers that stems not from the actual probabilities but from their subjective weighting of those probabilities.
Table 4 (page 315) shows people’s decision weights. On the low end, unlikely events are overweighted: a 1% chance of an outcome receives a decision weight of 5.5. On the high end, highly probable events are underweighted by an even larger margin: a 99% chance receives a decision weight of only 91.2. This is because the fear of losing an almost-sure thing weighs more heavily than the slight hope of an incredibly unlikely one.
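The two endpoints quoted from Table 4 already reveal this asymmetry; a small sketch makes the gaps explicit (the 5.5 and 91.2 figures are from the text).

```python
# Probability (%) -> decision weight (%), the two endpoints of Table 4.
table4 = {1: 5.5, 99: 91.2}
gaps = {p: round(w - p, 1) for p, w in table4.items()}
print(gaps)  # {1: 4.5, 99: -7.8}
```

The distortion at the top (7.8 points of underweighting) exceeds the distortion at the bottom (4.5 points of overweighting).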
Because our decision weights are subjective and based on various biases, it makes sense that those weights are subject to loss aversion as well. The way we view probability is not based on objective value, but instead based on our automatic emotions about the potential outcomes.
Probabilities that are extremely low or high (below 1% or above 99%) are sometimes ignored outright; when they are not, we tend to overweight them. Additionally, people are almost completely insensitive to variations of risk among small probabilities: a cancer risk of 0.001% is not easily distinguished from a risk of 0.00001%, even though the former would translate to 3,000 cancers in the United States and the latter to only 30.
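The conversion from risk percentage to case count can be checked directly, assuming the rough U.S. population of 300 million that the example’s arithmetic implies.

```python
# Tiny percentage risks translated into absolute case counts.
us_population = 300_000_000  # assumed rough figure implied by the example
case_counts = [round(us_population * r / 100) for r in (0.001, 0.00001)]
print(case_counts)  # [3000, 30]
```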
These statistics in some ways recall chapter 13, in which people were asked to compare causes of death. It is hard to estimate certain causes accurately because they are vastly overrepresented in the media, which biases us to believe they are more important or more frequent than they actually are.
When we pay attention to threats, we worry about them, and our worry is not proportional to the probability of the threat. In one example, a $10 insect spray carries a risk of 15 inhalation poisonings and 15 child poisonings per 10,000 bottles. Parents are willing to pay an additional $2.38 to reduce those risks by two-thirds, but an additional $8.09 to eliminate them completely.
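The premium for certainty shows up in the implied price per unit of risk removed (a sketch of the arithmetic, treating the 15-per-10,000 figure as the baseline for a single risk):

```python
baseline = 15                          # poisonings per 10,000 bottles
removed_by_partial = baseline * 2 / 3  # cutting the risk by two-thirds removes 10
removed_by_final = baseline / 3        # eliminating the remainder removes the last 5
price_partial = 2.38 / removed_by_partial  # dollars per poisoning avoided
price_final = 8.09 / removed_by_final
print(round(price_partial, 3), round(price_final, 3))  # 0.238 1.618
```

Parents value removing the last sliver of risk at nearly seven times the per-unit price of the first two-thirds, which is the certainty effect in dollar terms.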
In a situation in which a person’s well-being is at stake, people become even more loss averse, because even a slight residual risk means that someone might be harmed, and people understandably do not want to take the chance that it might be their child.
The fourfold pattern is described as follows. In the domain of gains, with a 95% chance to win, people fear disappointment, become risk averse, and accept unfavorable settlements. With a 5% chance to win, people hope for a large gain, become risk seeking, and reject favorable settlements (this explains why lotteries are popular).
The possibility effect is one of the biases at play here, particularly in the lottery example. Combined with the fact that people do not deal well with extremely small probabilities, even the slim chance of enormous gain makes people want to take a risk and buy into it.
In the domain of losses, with a 5% chance to lose, people fear a large loss, become risk averse, and accept unfavorable settlements (this is why people buy insurance). With a 95% chance to lose, people hope to avoid the loss, become risk seeking, and reject favorable settlements, making desperate gambles in the small hope of avoiding a large loss.
When facing losses, then, people act in essentially the mirror image of the gains pattern (avoiding risk when they are almost guaranteed not to lose and seeking it when they are nearly certain to lose), all in the hope of avoiding losses.
Kahneman then applies the fourfold pattern to a court case. Plaintiffs with strong cases will accept unfavorable settlements because they worry about their odds of losing, even though those odds are slim. Defendants with weak cases, on the other hand, will push for trial, because the sure loss of a settlement is painful. In this face-off, the defendant holds the stronger hand.
By applying the fourfold pattern to a court case, Kahneman demonstrates how inherent biases can have serious and real-world consequences. In this scenario, we empathize with and understand the plaintiff’s position, even though we know we are underweighting the probability that they would win in court.
Kahneman contrasts this case with a frivolous suit, in which a plaintiff with a flimsy case files a large claim. Such plaintiffs overweight their small chance of success and negotiate aggressively for a settlement; the defendant, wanting to avoid even a small risk of a very bad outcome, gives in, so here the plaintiff holds the stronger hand.
In this case, we again understand the defendant’s viewpoint and, simply by reading about the suit, can feel how our own emotions would guide us to act in exactly the same way.
It is easy to empathize with the plaintiffs and defendants who hold the weaker hand. In the long run, however, their strategy is costly. Suppose the City of New York faces 200 frivolous suits each year, each with a 5% chance of costing the city $1 million. If the city settles every case for $100,000, its total loss is $20 million; if it litigates all 200 cases and loses 10 of them, it loses only $10 million. Taking the long view of these cases demonstrates that paying a premium to avoid a small risk of a large loss is costly.
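The city’s two strategies can be tallied directly from the numbers in the example:

```python
cases = 200
p_lose_at_trial = 0.05
judgment = 1_000_000
settlement = 100_000

settle_all_cost = cases * settlement              # pay every claim
expected_losses = round(cases * p_lose_at_trial)  # about 10 cases lost at trial
litigate_all_cost = expected_losses * judgment
print(settle_all_cost, litigate_all_cost)  # 20000000 10000000
```

Settling everything costs twice as much as litigating, which is the long-run price of overweighting each suit’s small 5% risk.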
Following the goals of his book, Kahneman alerts readers to why acting against a rational model can be damaging in the long run. He urges them to acknowledge their biases and not let the emotional, automatic responses of System 1 control their choices.