Weapons of Math Destruction

by

Cathy O’Neil


Weapons of Math Destruction: Chapter 1: Bomb Parts Summary & Analysis

In August of 1946, the Cleveland Indians lost the first game of a doubleheader. So, player-manager Lou Boudreau decided to reposition his fielders whenever the opposing team’s greatest hitter, Ted Williams, came up to bat. Boudreau, O’Neil writes, was thinking like a data scientist: he’d analyzed Williams’s hitting patterns and rearranged his own team around them. Nowadays, Major League Baseball is an enormously data-driven game. Baseball statisticians have spent years making mathematical models based on “every measurable relationship among every one of the sport’s components.”
This passage provides an example of a relatively benign approach to mathematical modeling. It shows that algorithms and data science can be used to create advantageous results without directly harming anyone. Models that measure and interpret baseball statistics aren’t WMDs because they’re generally both efficient and fair.
Themes: Humanity vs. Technology
Even though baseball models now define how the game is organized, played, and bet on, they’re transparent: everyone has access to the statistics that rule the game. A huge amount of data that’s highly relevant to the outcomes statisticians and fans are trying to predict is coming in all the time (12 or 13 baseball games are played each day between April and October). Many of today’s WMDs, by contrast, are mysterious, and they often sorely lack data on the behaviors they’re interested in. So, they use proxies (stand-in data), like people’s zip codes and the languages they speak, to determine things like how likely a person is to pay back a loan.
The baseball models are relatively neutral because they use easily attainable data and stick to the numbers provided. But WMDs, which rely on proxy data and on data gathered through hidden or manipulative means, use data for their own unclear purposes rather than to measure or learn something authentically.
Themes: Humanity vs. Technology; Fairness vs. Efficiency
Even though baseball models and the model used to evaluate teachers in Washington, D.C. are incredibly different, they are both models: a model is an “abstract representation of [any] process.” Human beings carry models in their heads all day—as an example, O’Neil uses the “informal model” of how she decides what to cook for her large family each night. She has data (each person’s likes and dislikes), and she has new information concerning that data all the time: fluctuating grocery prices, changing tastes, and anomalies like special meals for special occasions.
O’Neil suggests that creating models and building algorithms to accomplish tasks, gather data, and streamline certain processes isn’t inherently wrong. Models are useful, and it makes sense that as technology has grown more sophisticated, algorithms have become increasingly common in many sectors.
Themes: Humanity vs. Technology; Fairness vs. Efficiency
Making a model, though, requires simplification, and this means that most models have blind spots that reflect their creators’ judgments and priorities. For instance, O’Neil wouldn’t feed her children Pop-Tarts at every meal, even though they love them. So, she’s imposing a bias and a judgment on the model of how she feeds her family.
One of the issues with models is that they’re vulnerable to human bias. This means that creators need to account for that bias rather than ignore it—doing the latter can lead to the creation of a WMD.
Themes: Humanity vs. Technology; Fairness vs. Efficiency
Models can change, too, based on their creators and purposes: O’Neil’s children might build a model featuring ice cream at every meal, while a North Korean bureaucrat might optimize the model to feed a family a cost-effective bare minimum. Models reflect our personal realities, and they must constantly change or risk growing stale and irrelevant. Even good models can be primitive, but primitive models can also be dangerous. For example, a racist’s worldview rests on a single biased and unchanging model, one that refuses to be affected by new “data” or experiences.
Here, O’Neil illustrates how vulnerable models are to bias, as well as to faulty or outdated information and oversimplification. Again, she’s showing how much human intervention and maintenance models need in order to avoid becoming WMDs.
Themes: Humanity vs. Technology; Fairness vs. Efficiency
O’Neil turns to an example from 1997 of how racism is a brutal, unfair model. When a Black man named Duane Buck was convicted of two murders in Harris County, Texas, the jury had to decide whether he’d receive a sentence of life in prison or the death penalty. The defense attorney called a witness, psychologist Walter Quijano, to the stand, and Quijano testified that “the race factor” in the case made Buck’s “future dangerousness” more likely. The jury sentenced Buck to death. The Texas attorney general would later retry several cases in which Quijano’s “race-based testimony” had resulted in harsher sentences, but Buck never got a new hearing, and he remains on death row.
This passage provides an example in which an unfair, outdated, inaccurate model—Quijano’s racism—had devastating effects on the person to whom it was applied (Duane Buck). O’Neil shares this anecdote to, once again, show how faulty, biased, and ultimately cruel models can be when they’re left unchecked or fed faulty data.
Themes: Humanity vs. Technology; Discrimination in Algorithms; Fairness vs. Efficiency
Prosecutors in Harris County are three times more likely to seek the death penalty for Black defendants than for white ones. And sentences imposed on Black men are 20 percent longer than those imposed on white men, even though Black people make up only 13 percent of the U.S. population (and 40 percent of its prison population). Courts across several states have turned to algorithms called recidivism models in hopes of erasing racism from U.S. court systems. But O’Neil argues that these models can simply mask human bias.
In Harris County, there was a need for an unbiased, data-driven method of assisting judges in making tough calls. So, recidivism models that promised fairness and justice seemed like a logical solution—they’d take human bias out of the equation and thus deliver an impartial recommendation.
Themes: Humanity vs. Technology; Discrimination in Algorithms; Fairness vs. Efficiency
Recidivism models such as the LSI-R (Level of Service Inventory-Revised) are biased against poor people and racial minorities. They ask about prior involvement with the police, something that’s more likely for Black and Latino youths, who are regularly targeted by police through programs like stop-and-frisk. They also examine whether one’s friends or family members have criminal records, again something that’s statistically more likely in low-income neighborhoods. Recidivism models, O’Neil argues, are WMDs because of the “toxic cycle” of damaging feedback loops and biases that structure them.
While recidivism models might claim to be race-blind and unbiased, they really aren’t. Society is biased, and the model doesn’t do anything to temper that. Therefore, O’Neil is suggesting that it’s unjust for judges to use these models to aid in sentencing because of the implicit bias built into them. This bias can create unfair consequences—a “toxic cycle” that discriminates against and disproportionately punishes non-white people.
Themes: Humanity vs. Technology; Discrimination in Algorithms; Fairness vs. Efficiency
The baseball model and the family dinner model discussed earlier, O’Neil writes, are both open and transparent. But the recidivism model, she suggests, is largely invisible, and for the most part, hidden models are the rule rather than the exception. Transparency is important, and yet a hallmark of most WMDs (especially those owned by companies like Google, Amazon, and Facebook) is that they are difficult to understand “by design.”
When models are used to determine outcomes as serious as prison sentences, they need to be transparently made and transparently used. But WMDs rely on being inscrutable in order to prevent people from seeing the biased or faulty ways in which they work.
Themes: Humanity vs. Technology; Fairness vs. Efficiency; Data, Transparency, and U.S. Democracy
Another major component of a WMD is its capacity to grow, or scale. WMDs in the human resources, health, and banking sectors are quickly becoming “tsunami forces” that define how people live their lives and whether they can access certain opportunities. Recidivism models like the LSI-R, which are presented as tools of prison reform, are perceived as fair and efficient, but that couldn’t be further from the truth.
When WMDs dictate serious decisions or when they are widespread throughout a certain part of society, they become dangerous because of how much influence they exert over humanity. O’Neil compares WMDs to “tsunami forces” to emphasize the idea that algorithms can do immense damage to people’s lives.
Themes: Humanity vs. Technology; Fairness vs. Efficiency
The three elements of a WMD, according to O’Neil, are opacity, scale, and damage. Not all of the WMDs she’ll discuss throughout the book, she writes, are entirely hidden, huge in scale, or the causes of irreversible damage. But they are all threats in those three arenas on some level. The “menace” of WMDs, she argues, is rising—and the world of finance is an example of what could happen if they spiral out of control.
O’Neil defines a WMD as an algorithm that’s opaque, widespread, and harmful. These qualities are why she compares harmful algorithms to weapons of mass destruction (like a nuclear bomb, for instance). Like weapons of war, WMDs are a “menace” that can destabilize society, create chaos, and derail people’s lives.
Themes: Humanity vs. Technology; Discrimination in Algorithms; Fairness vs. Efficiency; Data, Transparency, and U.S. Democracy