Weapons of Math Destruction

by

Cathy O’Neil

Weapons of Math Destruction Summary

In Weapons of Math Destruction, mathematician and writer Cathy O’Neil explores the world of Big Data—and its insidious, fast-growing control over almost every aspect of modern life. In order to understand why faulty, invasive new algorithms are so widespread and powerful, O’Neil explores the history of how models have been used since the 17th century to determine things like who can buy insurance, how students learn, how politicians run their campaigns, and what kinds of sentences criminals receive.

O’Neil is a mathematician who has put her skills to use as a professor of mathematics, as a quant for the hedge fund D.E. Shaw, and as a data analyst for numerous start-ups. She became disillusioned with the so-called “Big Data economy” around the time of the 2007-2008 financial crisis. The crisis originated because lenders were using subprime mortgages to create mortgage-backed securities—in simple terms, an entire economy was being built on nothing. These financiers had misused math—a sacred tool to a passionate mathematician like O’Neil—and destroyed lives in the process. As O’Neil began thinking about data’s stranglehold on modern life, she started to investigate the role of mathematical models in some of humanity’s most important institutions: schools, insurance companies, the justice system, and more.

By examining how inaccurate teaching assessments and biased recidivism models sacrifice fairness and justice in the name of efficiency, O’Neil suggests that harmful computer algorithms that she terms “weapons of math destruction” (WMDs) have taken over what used to be analog, human-driven processes. Instead of a banker meeting with a pair of newlyweds before determining whether their bank should offer them a loan, predatory e-scoring models and other WMDs now determine who’s fit to receive what kind of loans or credit. And instead of a judge sentencing a person based on the severity of their crime, criminals are now subject to models that size up their family members’ and acquaintances’ criminal histories, deciding whether they’re likely to offend again based on their home environment.

Weapons of math destruction must meet a few main criteria: they must be “opaque,” widespread, and damaging. In other words, their methods of gathering data (or the purposes for which they use that data) must be hard to decipher, their influence must be vast, and they must create hardship or deepen inequality in society. To illustrate how destructive these algorithms are, O’Neil explains how WMDs have infiltrated the college admissions process in the U.S. (and college rankings to boot); how they define job hunts and work schedules; how they determine who is eligible for credit cards, loans, and insurance policies; and how they interfere with political campaigns and elections.

After exploring how faulty or biased algorithms threaten much of U.S. society—from policing to school systems to coffee shops—O’Neil concludes that it will be difficult to dismantle WMDs because of how interconnected society is. The same data that encourages for-profit colleges to send out predatory ads and law enforcement to use predictive policing software also categorizes people based on the “risk” they represent to society.

There’s still time for corporations to right wrongs in their algorithms, remove bias from their models, and restore humanity’s hand in making big decisions about the fates of students, workers, and consumers. Machines, unlike humans, have no concept of fairness. WMDs could be used to help people—they could predict spots where child abuse is more likely to occur or stop corporations from using slave labor in their product manufacturing. But instead, they’re just making modern life more unequal (and more automated). O’Neil concludes that while there may never be a single definition of what makes an algorithm fair, it’s up to corporations and lawmakers to set standards for how to hold algorithms accountable and improve how they work.