Carr opens the chapter with Joseph Weizenbaum, an MIT computer scientist who in the mid-1960s created a program that could parse and respond to natural language. The program, called ELIZA, recognized patterns in typed text and rephrased users’ statements as questions; its most famous script gave it the persona of a psychotherapist. Strangely, despite the program’s simplicity and obvious artifice, ELIZA caught on. Weizenbaum was shocked by how ready people were to suspend disbelief and become “emotionally involved with the computer.” Engaging with ELIZA was like a variation on Alan Turing’s “Turing test,” in which a judge converses by text with both a computer and a human; if the judge cannot reliably tell which interlocutor is the human, the program can be considered intelligent. Those who used ELIZA, on the other hand, knew their interlocutor was a program, and yet they wanted to believe it was real.
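ELIZA’s core trick, as described above, was simple pattern matching: find a keyword template in the user’s statement, swap first-person words for second-person ones, and hand the fragment back as a question. A minimal sketch in Python conveys the idea (the rules here are illustrative toy examples, not Weizenbaum’s actual DOCTOR script):

```python
import re

# A few toy pattern -> response templates in the spirit of ELIZA's
# psychotherapist script (hypothetical rules for illustration only).
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Pronoun reflection: turn the user's first person into second person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Match the first rule that fits and rephrase the input as a question."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # content-free fallback when nothing matches

print(respond("I need a vacation"))  # -> Why do you need a vacation?
```

The fallback line is telling: even when ELIZA understood nothing, a bland therapist’s prompt kept the conversation going, which is part of why users so readily projected a mind onto it.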
ELIZA, a computer program that masquerades as a therapist, calls into question what kind of relationship a user really wants with his or her machine. The fact that humans choose to become emotionally involved with programs they know logically are made by computers is, for Carr, a disturbing revelation. Carr opens the book’s final chapter with ELIZA to set the stage for a discussion on our spiritual relationship to computer technology.
The public’s reaction to ELIZA brought Weizenbaum to contemplate the fascinating question of why the intellectual technology of computers has made the idea of man as a machine so much more plausible. In his book, Computer Power and Human Reason, Weizenbaum suggests that intellectual technologies like the computer have become so important to our society that “they can no longer be factored out without fatally impairing the whole structure.” Following the same pattern as past intellectual technologies, the enmeshment of the computer in our daily lives is a permanent commitment. Weizenbaum’s book was unpopular with fellow programmers, however, as it warned not only that AI science had major limits, but that we risked losing our humanity if we started assigning computers the tasks that make us most human––for example, tasks requiring wisdom.
Here Carr directly relays what has already been a common refrain throughout the book: Computer technology is thoroughly enmeshed in our lives. The extent to which we rely on computers has reached, quite seriously, a point of no return. Carr has illustrated this pattern with many intellectual technologies, and the Net is no different in the breadth and scope of its influence. His worry, once again, is that our fascination with creating artificial intelligence in machines like the computer is detrimental to our very humanity.
Carr explains that the human ability to meld with our tools is our distinguishing trait as a species. When a carpenter raises his hammer, his brain reads the hammer as part of his hand. These bonds, Carr points out, go both ways. Tools both extend possibilities and constrict them. A hand holding a hammer can be used only for pounding nails. The same analogy applies to computers. Carr found that after a period of word processing he lost the knack for writing by hand. Indeed, cursive is disappearing from curricula altogether. About this phenomenon of tools’ two-way effects, Marshall McLuhan wrote that our tools numb whatever part of the body they “amplify.” The numbing concept is not a new one. Carr suggests that the price we have always paid for technology’s power is alienation. Even mapmaking diminished our internal navigational skills. Carr is not being dramatic but rather advising that, for each new intellectual technology, users make an honest accounting of which skills are being sensitized and which are being dimmed.
In this segment Carr builds on the idea that computer technology is affecting our identities. As we amplify our faculties with tools, we actually numb the ability in question. Here Carr harkens back once more to a historical pattern in which new technologies always come at a price. Something is dimmed for each thing that is sensitized. In the case of intellectual technologies, what is it we are really losing?
Carr brings up another fascinating reason for the ease with which our nervous systems merge with computers: social instinct. As humans have evolved, we have become increasingly social beings. A recent neuroimaging study revealed that we have brain regions dedicated to the act of “mind-reading,” or trying to figure out what is going on in other people’s heads. Harvard neuroscientist Jason Mitchell suggests that our high facility for detecting minds has, in the computer age, led to the perception of minds in inanimate objects. Our brains mimic the states of other brains we interact with, so not only are we quick to attribute human qualities to manmade machines, but we are prone to taking on machine qualities ourselves.
In this segment Carr gives us a possible explanation for why we seem so intent on merging with our intellectual technology. His argument, as it culminates here, is that we compare our brains to computers not because they are innately similar but because we are intensely social beings. Using scientific context, Carr urges us to see that identification with computers is a social phenomenon that could have unexpected consequences.
Reliance on highly efficient computer programs, Carr warns, actually can inhibit performance and intellectual choices. In a study done by Christof van Nimwegen in 2003, two groups of volunteers were asked to solve a puzzle. One was given helpful software and the other unhelpful software. It was, in fact, the group with the unhelpful software that was able to solve the puzzle with the fewest mistakes. Van Nimwegen suggests that outsourcing cognitive work reduces our personal ability to build knowledge structures.
Some of these consequences are pragmatically undesirable. As Carr points out in this segment, van Nimwegen’s studies suggest that we actually are better problem solvers without the aid of technology because we are forced to build our own internal skills.
The focus on creating ever-more “user-friendly” programs for computer users, in this light, does not bode well long-term for human depth of intelligence—especially because search programs place emphasis on the most prevalent, mainstream opinion. The fact that we no longer have to skim the lesser-known articles to get to the one most relevant to our topic means we are being nudged constantly towards the most common point of view. Humans know that the easy way is not always the best way but, as Carr warns, the easy road is the road search engines encourage us to take. Taylorism is a good analogy. After Taylor, workers in factories began to follow a script written by someone else rather than coming to their own unique conclusions. Computer programs can be useful and ingenious, but the process of creativity is a messy one that cannot be reduced to a series of steps. Computer programs cause us to rely less on our intuition and more on pre-established routes and ideas.
Some of these consequences are existentially disturbing. Carr’s ultimate point is that reliance on computer technology makes for a shallower type of intelligence. Creativity is a messy process, and computer technology and efficiency-centric Net software rob us of the journey to unique conclusions. What Carr is getting at here is that it is the personal journey which makes for the depth and complex architecture in our identities. All we are getting from databases are final answers, and they are likely to be more mainstream and uninventive than what we would have discovered unaided.
Carr brings the final chapter of The Shallows to a close with a callback to the Transcendentalist movement, this time using a scientific study that supports Transcendentalist ideas. In 2008, a team at the University of Michigan subjected two groups of people to tests designed to tax working memory and the ability to stay focused. They then had one group walk through a park and the other group walk through a busy city street. When re-tested, the group that walked through the park improved significantly, while the other group showed no improvement. The conclusion of the researchers was that “simple and brief interactions with nature can produce marked increases in cognitive control.” On the Internet, there is no comparative oasis of restoration, but only the busy street.
Carr approaches the subject from both a scientific and a philosophical angle. Showing Transcendentalist theory prevailing in a scientific study, Carr concludes that one simple reason why using the Net and computer technology diminishes creative ability is that we don’t get that “break” from the artificial. Though we might feel like surfing a different website is a “break” from our work, studies suggest that interactions with physical nature actually refresh and improve our ability to stay focused. Once again, Carr suggests that the Transcendentalists may have been right about more things than we give them credit for.
Carr points out that a quiet mind is not only necessary for deep thinking but also for complex human emotions like compassion. Antonio Damasio, director of USC’s Brain and Creativity Institute, conducted studies showing that the most complex human emotions are inherently slow to emerge. Using neuroimaging, Damasio found that when a subject saw a fellow human experience physical pain, the brain responded almost instantly. Empathizing with psychological pain, in contrast, unfolded far more slowly, indicating that the process of empathy requires time to develop. We need to “transcend immediate involvement of the body” to grasp the true moral dimensions of a situation. The experiment indicates that distraction impedes us from experiencing the most subtle (and human) forms of emotion. Carr suggests that not just our power to concentrate is diminished by the way the Net is rerouting the brain—our ability to form complex emotions is being hampered as well.
Carr’s argument in this final chapter appeals to every aspect of humanity. In this segment he goes so far as to warn that the distracted state promoted by the internet could have moral consequences in the real world. Subtle emotions like empathy don’t have the time to develop when we are in a state of Net-induced distraction. If the pragmatic consequences of poorer memory don’t worry the reader, losing the ability to experience a full range of human emotions should do the trick. Carr, as we see at this point, is approaching again and again the same frightening idea that Net usage is diminishing many aspects of our humanity.
It’s true that many are heartened by the change, excited that we are evolving to gain new multitasking skills and shedding abilities “perfected in an era of limited information flow.” The writers in favor of these changes see these new cognitive habits as the only solution to navigating the digital age, but Carr does not find their arguments reassuring. He quotes philosopher Martin Heidegger, who observed in the 1950s that technology might “so captivate, bewitch, dazzle, and beguile man that calculative thinking may someday come to be accepted and practiced as the only way of thinking.” The literary mind and its companion, meditative thinking––attributes historically considered the essence of humanity––may, Carr warns, fall victim to what bedazzled Net users call progress.
Here Carr makes his final appeal to the reader. His entire book has worked towards a singular warning: under the guise of progress, essential parts of our humanity are being lost. Having established that humanity’s essence is located in our ability to complete tasks with quiet minds and meditative, creative thinking––in other words, to complete tasks with wisdom––the reader can see that a new definition of intelligence as calculative has usurped the older, “literary” view of identity. Carr’s scientific context, historical patterning, and spiritual analysis culminate in the chilling conclusion that the Internet is not only changing our brains; it is also changing our identities, and there may be little we can do to stop it.