Paul Van Riper is a tall, elderly man. He fought in the Vietnam War as a commander, and was involved in some of the toughest fighting of the entire war. Soldiers remember Van Riper as a charismatic, intensely fair commander who sometimes took dangerous risks with his soldiers, but who was willing to risk his own life for the good of his company.
Paul Van Riper will be the “main character” of this chapter: an excellent example of how improvisation and snap judgments can be important elements of success.
In 2000, a group of Pentagon officials recruited Van Riper for a highly expensive “war game” designed to train American troops and test new theories about military strategy. In this war game, known as the Millennium Challenge, soldiers would fight against a fictional Middle Eastern military commander who was threatening to pull the entire region into war. Van Riper was cast as that commander. In most Pentagon war games, the battles (between a heroic Blue Team and a villainous Red Team) take place in a remote part of Suffolk, Virginia. For two and a half weeks in the summer of 2002, Van Riper and the other commanders participated in the war game, fighting highly realistic “battles.” Although nobody knew it at the time, the war game anticipated the literal war that the U.S. would fight in Iraq less than a year later: instead of a fictional Middle Eastern dictator played by Van Riper, the U.S. would be fighting a real-life dictator, Saddam Hussein.
The Millennium Challenge has become notorious in the world of military strategy because it inadvertently predicted the war in Iraq: soldiers were being trained to fight a rogue Middle Eastern dictator not unlike Saddam Hussein. The Millennium Challenge is also a good illustration of how perfect information and careful consideration of the evidence aren’t always useful components of military strategy; there are times when perfect information can interfere with the decision-making process.
The Millennium Challenge yielded some interesting results. To begin with, the Blue Team fought the war game with the help of a tool called Operation Net Assessment—a decision-making procedure that broke down every military decision into its economic, social, and political factors. By contrast, the Red Team used improvisational, unpredictable tactics. Casting Paul Van Riper as the enemy commander was a clever choice, because Van Riper had always believed that war was inherently unpredictable—that it was based on commanders making intuitive snap judgments. Van Riper maintained that conventional decision-making strategies—i.e., weighing all options carefully—were too slow for the military.
Operation Net Assessment arguably symbolizes the dangers of conscious, perfectly rational thinking—as Gladwell will show, perfect rationality and evidence-weighing aren’t always as powerful and effective as people think. In short, the Millennium Challenge is a great “case study” for the clash between the conscious, rational mind (as represented by the Blue Team’s military strategy) and the unconscious, intuitive mind (as represented by Van Riper’s Red Team).
As the war game began, the Blue Team, representing the U.S., sent the Red Team (led by Van Riper) terms of surrender. When Van Riper refused to surrender, the Blue Team tried to disable the Red Team’s communications with bombs. Much to the Blue Team’s surprise, Van Riper improvised a series of complex codes for communicating with his Red Team soldiers. The Blue Team assumed that it would be easy to predict what the Red Team would do next, but it quickly found that its predictions weren’t accurate. Then, in a single day, Van Riper struck out against the Blue Team and disabled the vast majority of its ships and aircraft. Despite the fact that the Red Team was badly outnumbered, it crushed the Blue Team. Somewhat like the Getty officials, who were unable to detect that their Greek statue was a fake, the Blue Team leaders were unable to foresee that Van Riper would defy their predictions and defeat them.
Right away, the Millennium Challenge exemplifies some of the advantages of intuitive decision-making and some of the pitfalls of rational evidence-weighing. The Blue Team believed that it was making the right decision, but in fact it was wasting valuable time on predictions that turned out to be inaccurate. Van Riper’s greatest strength as a commander was his unpredictability—again and again, Van Riper was able to outwit Operation Net Assessment, perhaps suggesting the limitations of excessive rationality. Like the Getty experts who evaluated the Greek statue, the Blue Team allowed evidence and information to cloud its judgment.
A good example of spontaneous thinking is the art of humorous improvisation. In an improv group, members ask for audience suggestions, and then use these suggestions to create a skit. Improvisation seems incredibly difficult, since it apparently involves making up an entire skit on the spot. But, upon close inspection, it turns out that improvisation isn't as “random” as it might appear. After each show, members of a good improv troupe critique one another’s performances and determine how to improve in the future. Improv might seem random, but in fact, it’s governed by a series of rules.
Improvisational comedy is a great example of the methods of spontaneity. It might seem contradictory to talk about “methods of spontaneity” (since, one could argue, spontaneity is by definition not methodical), but in fact, Gladwell shows, certain rules govern spontaneous behavior and can provide the proper environment for it to arise and function at its best.
The crucial lesson of improvisational comedy, and of the Millennium Challenge in which Paul Van Riper participated, is that spontaneity isn’t random. When people are in a high-stakes situation (war, performance, etc.), they act quickly and make snap judgments, but they also follow certain intuitive rules. A good example of a “rule of spontaneity” is the “yes, and” rule in improv: the rule that, when a performer offers a suggestion or new idea, the other performers immediately agree with that idea and use it to “move the scene forward.” It’s important to notice that the “yes, and” rule doesn’t offer any advice for what to do or say in an improv performance; rather, it creates the optimal conditions for a good performance. By the same logic, Van Riper’s behavior during the war game might have seemed random to the Blue Team—and yet, he was careful to “create the conditions for successful spontaneity.”
There is a big difference between spontaneity and randomness. Randomness is chaotic, muddled, and by definition impossible to plan. Spontaneity, on the other hand, can be rehearsed and trained for. For example, Van Riper spent many years as a soldier perfecting his ability to be spontaneous under pressure. While it may seem like a contradiction to say that spontaneity can be practiced, Gladwell argues that there’s no contradiction at all. Even if spontaneous behavior itself will always be unpredictable and to some extent random, it’s possible to perfect the development of the proper conditions of spontaneity.
When Van Riper was fighting in Vietnam, he often heard gunshots in the distance. At first, he made the mistake of radioing his troops in the field to ask about the gunfire. But gradually, Van Riper realized that his troops didn't necessarily know anything more about the gunfire than he did. From then on, whenever Van Riper heard gunfire in the distance, he would “stop and do nothing” for five full minutes. It was better, he decided, to let the soldiers firing the shots resolve the situation themselves than it was to create a potential panic by alerting everyone of the danger.
It might seem like poor leadership for a military commander to do nothing after hearing gunfire in the distance. But in fact, Van Riper’s decision to do nothing reflects years of experience and careful consideration. Van Riper recognizes that by contacting his soldiers in the field, he might be interfering with their ability to resolve a problem.
Van Riper also applied the lessons he’d learned in Vietnam to the war game. He made an effort to cut down on long meetings and introspective explanations. He also warned his troops to cut down on military jargon like “effects.” In short, Van Riper’s management technique was to encourage people to use rapid cognition.
Not unlike a good comedic improviser, Van Riper tried to optimize the conditions of spontaneity for his soldiers: instead of weighing his soldiers down with excessive orders and questions, he gave them the space and the freedom to “work it out.”
While rapid cognition isn’t perfect, it has some clear advantages over conventional thought. For example, Gladwell says, if someone asked you to pick out the person who served you coffee this morning, you could probably do it. However, if someone first asked you to describe this same person, or draw their face, you’d probably struggle to remember specific details of their appearance—and afterward, you’d (presumably) have a much harder time picking the person out of a police lineup. In short, conscious thought can impede rapid cognition.
Gladwell argues again that rationality can interfere with spontaneity: the act of verbalizing a stranger’s face can prevent you from visualizing that stranger’s face (although, of course, Gladwell’s example here might not apply to everyone). To put it another way, the conscious and unconscious parts of the mind occupy two different “territories,” and when one part of the mind intrudes on the other, problems arise.
The phenomenon by which conscious thought can interfere with rapid cognition is called “verbal overshadowing.” In general, rational thought can overshadow rapid cognition. In brainteasers, rational thought isn’t always enough to solve the puzzle—either you have a “eureka” moment and see the answer, or you don’t see it at all. Furthermore, when people are asked to “talk through” their attempts to solve the brainteaser, they become less likely to solve it successfully. In other words, when people try to verbalize their thoughts, they sometimes sabotage their chances of making an insightful snap judgment.
Verbal overshadowing is an important concept because it shows how excessive rationality can undermine the overall power of the mind. Solving a puzzle or a brainteaser (or, to bring it back to Van Riper, winning a war) isn’t necessarily a rational act; sometimes, the only way to succeed is to use the unconscious mind. Thus excessive rationality and verbalization can undermine the unconscious and prevent it from discovering the solution to a problem.
Gladwell relates the story of a group of firefighters who’d been sent to put out a fire in the kitchen of a house. The firefighters tried to put out the fire, but found that it kept burning. Then one of the firefighters had a sudden impulse to get out of the building, so he shouted for his friends to leave with him. Seconds after they’d left, the floor they’d been standing on collapsed: it turned out that the fire was in the basement. This story is a great example of thin-slicing in action. Without being consciously aware of what the danger was, the firefighter was able to make a rapid judgment from behind the “locked door” of his mind and save his friends’ lives. During the Millennium Challenge, the Blue Team’s mistake was to rely too heavily on slow, rational deliberation. The Blue Team underestimated Van Riper’s ability to improvise under pressure (e.g., using codes to communicate in secret); the Blue Team itself was also unable to improvise well. Instead, it held long meetings, full of complicated plans and arguments. The Blue Team “extinguished” its own ability to make snap judgments.
As the example of the firefighters suggests, rational, logical decision-making has a notable disadvantage: it takes too long. In the heat of the moment (whether in a fire or on the battlefield) people rarely have the time to consider all the evidence fully. Therefore, the best course of action is often to make a “gut decision.” During the Millennium Challenge, for instance, Van Riper succeeded as a commander because he excelled at gut decisions—whereas the Blue Team failed because it relied too heavily on a thorough, information-heavy decision-making process.
Another good example of the importance of snap judgments is the Cook County Hospital in Chicago—the hospital that inspired the TV show ER. Until the late 1990s, the hospital was extremely disorganized: it was loud, poorly lit, and understaffed (there were too many patients and never enough nurses). Because of hospital conditions, nurses had a tough time diagnosing patients with potential heart disease problems. The standard procedure for diagnosing heart disease involves asking lots of questions (such as “Do you have diabetes?” and “What’s your cholesterol level?”). But nurses didn’t always have time to ask these questions. So, in order to avoid malpractice suits and help as many people as possible, the nurses at the Cook County hospital were forced to admit many patients who might be suffering from heart disease—even though only a small fraction of these patients did, in fact, have heart disease.
Another good example of a high-stakes, “heat of the moment” situation is a hospital diagnosis. Often, doctors and nurses only have a short time to decide whether a patient has heart disease (or any other condition) or not, and the consequences of a bad diagnosis are obviously enormous. As the passage makes clear, the Cook County Hospital had a strategy for avoiding wrong diagnoses: 1) admitting too many people, and 2) asking lots and lots of questions. Gladwell will show how this strategy actually interfered with the process of diagnosing patients.
In 1996, a man named Brendan Reilly became chairman of the Department of Medicine at the Cook County Hospital. Reilly had been a professor at Dartmouth College, but he wanted to bring his experience and education to an underfunded hospital like Cook County. One of the first things Reilly did to reorganize the hospital was to draw on the work of a cardiologist named Lee Goldman. Goldman had developed an algorithm, or “decision tree,” for treating heart disease patients as efficiently as possible and with the greatest level of success.
The strategy of Reilly and Goldman is basically to put the idea of “thin-slicing” into practice—but on purpose, and often in life-or-death situations.
Goldman’s method of diagnosis involved asking the patient only a small number of questions. It was controversial, however, because 1) doctors believed that more time was needed to perfect the “decision tree” and 2) doctors believed that individual doctors should use their own training and observations to diagnose heart disease, instead of using a fixed algorithm. Reilly chose to use Goldman’s decision tree because he had to act fast to improve hospital conditions. He first instituted Goldman’s heart disease treatment methods in only half of the hospital, in order to compare Goldman’s methods to the norm. After a month, it became clear that Goldman’s methods were far safer and more reliable than traditional methods for diagnosing heart disease. A doctor, using their own training and decision-making methods, could correctly diagnose heart disease about 80 percent of the time; Goldman’s decision tree could do so 95 percent of the time.
By choosing to adopt the decision tree of Lee Goldman, Reilly essentially was ordering his doctors to make life-or-death decisions based on a deliberately limited amount of information: ECG readings, history of heart disease, etc. Before Reilly, doctors at the Cook County Hospital had a different strategy: obtain as much information about the patient as possible. And yet, when the results came in, it was clear that Goldman’s method was the best. By cutting down the diagnosis process to the bare minimum of questions, Goldman encouraged doctors to work quickly and efficiently, and helped them avoid the pitfalls of “overthinking” the diagnosis.
The Cook County Hospital experiment is important because it suggests that sometimes the more information doctors have, the less they know. Intuitively, we might imagine that more evidence is the best way to reach the right decision. But in fact, it’s often better to limit the evidence to a few main points—in the case of Goldman’s decision tree, “the evidence of the ECG, blood pressure, fluid in the lungs, and unstable angina.” Goldman’s ideas have been controversial, because they contradict people’s instinctive trust in information and rationality. Doctors in particular think that a life-or-death decision “must be a difficult decision.” But in reality, Gladwell claims, life-or-death decisions are often the simplest decisions—and, as the Cook County Hospital experiment suggests, there are some clear dangers in overthinking such decisions.
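The frugal logic of a decision tree like Goldman’s can be sketched in a few lines of code. The sketch below is a hypothetical illustration, not the real Goldman criteria: it uses only the four factors the chapter names (ECG evidence, blood pressure, fluid in the lungs, unstable angina), but the thresholds and triage categories are invented for the example.

```python
# Hypothetical sketch of a Goldman-style "decision tree" for chest-pain
# triage, using only the handful of factors the chapter mentions.
# Thresholds and category names are illustrative inventions, not the
# actual Goldman criteria and not medical advice.

def triage(ecg_ischemia: bool, unstable_angina: bool,
           fluid_in_lungs: bool, systolic_bp: int) -> str:
    """Map four pieces of evidence to a triage recommendation."""
    # Count how many of the urgent risk factors are present.
    risk_factors = sum([unstable_angina, fluid_in_lungs, systolic_bp < 100])
    if ecg_ischemia and risk_factors >= 2:
        return "coronary care unit"      # highest risk
    if ecg_ischemia or risk_factors >= 2:
        return "intermediate unit"       # elevated risk
    if risk_factors == 1:
        return "short-stay observation"  # low risk
    return "send home"                   # very low risk

# A patient with ischemic ECG changes and low blood pressure:
print(triage(True, False, False, 95))  # -> intermediate unit
```

The design point mirrors the chapter’s argument: the function deliberately ignores cholesterol, diabetes history, and all the other questions doctors traditionally asked, yet it is exactly this kind of frugal, fixed rule that outperformed the information-heavy approach at Cook County.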
Gladwell acknowledges that Goldman’s findings seem very counterintuitive. One might assume that the best medical diagnosis uses as much evidence as possible. However, Gladwell argues that sometimes, more evidence is bad: as we saw with the Millennium Challenge, excessive evidence can cloud the decision-making process and result in bad decisions. Thorough decision-making is also slow and inefficient—a major problem at the overcrowded, understaffed Cook County Hospital. By convincing his doctors not to overthink their diagnoses, Reilly improved the overall quality of his hospital.
There was a psychological study in which different psychologists were asked to consider the mental health of a war veteran named Joseph Kidd. Different psychologists were presented with different amounts of information about Kidd—some were given his basic medical records, some were given long interviews with his parents, some were given detailed reports of Kidd’s experiences in the army, etc. Finally, all the psychologists were asked to make predictions about Kidd’s behavior. The psychologists who had more information were more certain about their responses than psychologists with less information, and yet the psychologists were all more or less equally accurate in their predictions.
The study reiterates the basic theme of this chapter: more information isn’t necessarily better. Excessive information is also a problem because it encourages people to be irrationally confident in their decisions. A psychologist who makes a bad diagnosis but thinks she’s right is probably more harmful than a psychologist who makes the wrong diagnosis and isn’t sure if she’s right or not—the second psychologist will be more open to changing her opinion later on.
There are a couple of important lessons to learn from this chapter, Gladwell says. First, “truly successful decision making relies on a balance between deliberate and instinctive thinking.” Snap judgments can be helpful at times and prejudicial at others; similarly, deliberate thinking can be helpful in some situations, but it can also cloud the decision making process. Another key lesson is that “in good decision making, frugality matters.” Excessive information and deliberation might sound good, but they can also interfere with the intuitive cognition of thin-slicing.
Gladwell is not saying that instinct is always better than rationality—just as he’s not arguing that rationality is always preferable to instinct. Rather, Gladwell argues that the best decisions incorporate elements of both rationality and intuition. For this reason, it can be dangerous to incorporate too much information into one’s decision, because excessive information interferes with intuition.
There was a famous experiment in which some grocery store shoppers were offered six different kinds of jam, while other shoppers were offered twenty-four kinds. The shoppers who were offered six kinds bought considerably more jam than those who were offered twenty-four: buying jam is a snap judgment, and the more choices people are offered, the less likely they are to reach a final decision. By the same token, Paul Van Riper tried to limit the amount of excessive information that reached his soldiers in the war game. While the Blue Team weighed down its soldiers with “perfect information,” Van Riper encouraged his Red Team to balance information with intuition—and as a result, he devastated the Blue Team.
The chapter closes with an elegant example of how “too much information” can harm the decision-making process. The customers faced with the larger selection of jams never reached a decision about which jam to buy: they just continued to weigh the evidence. By the same logic, the Blue Team commanders may have wasted too much time weighing the evidence and carefully considering all available information—while Van Riper acted quickly and instinctively, devastating the Blue Team in the process.
Toward the end of the war game, the Pentagon stepped in to undo some of the “damage” the Red Team had done. Although it was clear that the Red Team was winning the war, the Pentagon officials ordered the war game to revert to an earlier stage: the ships Van Riper had destroyed were “un-sunk” and the Blue Team leaders Van Riper had assassinated were “un-killed.” Van Riper was allowed to continue with his war against the Blue Team, but he was forbidden from improvising. As a result, the Blue Team beat the Red Team handily. Pentagon officials rejoiced, thinking that they’d proven that perfect information and deliberation were the keys to winning a war. Then, a couple of months after the war game was over, the Pentagon received word of a real-life Middle Eastern dictator who was opposed to the United States. Using the strategy it had developed for the war game, the Pentagon eagerly began to plan a real-life attack on this dictator, thinking: “how hard could that be?”
The chapter closes with another illustration of people’s bias against instinct and intuition. The lesson of the Millennium Challenge should have been that intuition plays an important role in warfare. But the Pentagon refused to give the adaptive unconscious the respect it deserved—instead, it concluded that in warfare, more information and technology are always better. Gladwell implies that the mistaken conclusions of the Millennium Challenge played a major role in the fiasco of America’s military intervention in the Middle East during the Bush presidency of the 2000s. The American military believed that it could use its superior firepower and perfect information to easily depose Saddam Hussein. But, as readers of Blink probably know well, American intervention in the Middle East didn’t go according to plan—suggesting that there’s always an element of randomness and spontaneity in warfare.