Do Androids Dream of Animal Rights?

Earlier this month, a playful story in the Washington Post explored the question whether an artificial intelligence (AI) might someday run for President of the United States. Fans of dystopic science fiction like The Matrix and the Terminator films or the television series Battlestar Galactica (the terrific mid-2000s reboot, not the horrid original) might have thought that they were alone in wondering about such matters, but apparently deep thinkers take seriously the risk that our future machines might rise up and kill us all. A recent New Yorker article noted that Oxford philosopher Nick Bostrom worries that AI “might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction.”

That truly does sound serious and seems to vindicate the title of the Washington Post story: You’re worried about Trump? In 100 years, robots might be running for president.

Yet alongside the murderous machines like HAL in 2001: A Space Odyssey, one can find kindly robots, who exhibit our best qualities, sometimes better than we humans do ourselves: Data in Star Trek: The Next Generation; C3PO, R2-D2, and now BB-8 in the Star Wars films; Wall-E; and the OS Samantha (voiced by Scarlett Johansson) in Her.

Which vision is more accurate? Will Siri and Google’s prototype self-driving cars pave the way for a future like The Jetsons, in which robots benignly serve us, or for one more like The Matrix, in which they farm us to meet their energy needs?

That is probably the wrong question to ask. Appealing science fiction has almost always said more about the world from which it emerged than about the world it imagined.

For example, the original Star Trek television series was born just before the Vietnam War escalated and went badly for the United States; its outlook was generally sunny; although the Enterprise crew were nominally multinational (and, counting Spock, multi-planetary), its sensibility was decidedly that of the can-do American spirit; and its battles were thinly disguised metaphors for the Cold War (occasionally not even bothering with the disguise). By contrast, darker times bring darker sci-fi. For example, an early episode of Battlestar Galactica featuring the torture of a Cylon clearly evoked Abu Ghraib.

What does the renewed serious and pop-culture focus on the future of AI tell us about the present? Partly, it might simply reflect the fact that while true AI remains a long way off, capacities like voice recognition and artificial speech have improved to the point that one can sometimes forget that our machines are simply executing programs. It is now easier than ever before to imagine a future AI.

Yet exploring the nature of the AI debate reveals something unexpected. Its terms have important implications for how we think about beings who exist in the here and now: non-human animals.

Express Links Between Robot Rights and Animal Rights

The connection between AI and animal rights may be surprising, but it is real and longstanding. Consider that in writing the Washington Post story on robots running for president, journalist Philip Bump sought comments from three lawyers: Pace Law Professor David Cassuto; Nonhuman Rights Project president Steven Wise; and yours truly. In addition to a willingness to humor journalists (and perhaps a taste for the spotlight), Cassuto, Wise, and I share an important characteristic: we are all vegans who believe that humans act terribly unjustly in our relations with nonhuman animals.

As each of us explained to Bump, whether someone should be treated with dignity—be treated as someone rather than something—should not depend on whether that someone happens to be a member of the species Homo sapiens. It is not surprising that in looking for people to defend the proposition that rights could extend to mechanical beings, Bump found three people who believe that rights should extend to non-human animal beings.

The connection between rights for robots and rights for animals has not been lost on science fiction writers. For example, the 1968 Philip K. Dick novel Do Androids Dream of Electric Sheep? (on which the film Blade Runner was based) poses the question of what it means to be human in a conflict between humans and androids playing out on a ruined planet Earth. The humans in charge believe that the key is the ability to feel empathy—an ability that the book’s androids lack. Strikingly, a test to determine whether a subject is an android looks at the subject’s reaction to being told that she has received a calfskin wallet as a birthday present. The right reaction, the one felt by empathic humans but not by the unfeeling androids, is one of horror and disgust at the prospect of the killing and exploitation of a baby cow.

Express animal rights themes can be found in other popular science fiction as well. E.T. tells the story of an extraterrestrial rather than of a robot, but aliens and robots occupy roughly the same fraught territory in science fiction. Notably, under the mysterious but benign telepathic influence of the angelic E.T., Elliott and his fellow students release the frogs destined to be killed in his science class. The ability to empathize with one set of others—whether aliens or sentient robots—is connected to the ability to empathize with others more generally, whether they are people of a different race or religion, or sentient animals of a different species.

From Artificial Intelligence to Artificial Sentience

The connection between AI and animal rights is not mere sentimentality. It runs right through the heart of the debate over AI itself.

Even the most advanced current technology falls short of artificial intelligence. How will we know when we have achieved it? One widely discussed (and also widely criticized) measure is the “Turing test,” named for the pioneering British computer scientist Alan Turing. He proposed that a machine capable of answering questions from a human interlocutor in such a way that the human cannot tell whether the answerer is a computer or another human ought to be denominated as intelligent.
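
For readers who like to see the mechanics, the toy sketch below (in Python, with hypothetical stand-in respondents of my own invention) merely stages the blind question-and-answer format of the test: a judge reads answers labeled only “A” and “B” and must guess which respondent is the machine.

```python
import random

def imitation_game(questions, human_respondent, machine_respondent):
    """Stage a blind question-and-answer session in the spirit of Turing's test.

    The judge sees only the labels "A" and "B" and must guess which
    respondent is the machine. Both respondents are hypothetical stand-ins.
    """
    labels = ["A", "B"]
    random.shuffle(labels)  # hide which label belongs to the machine
    hidden = {labels[0]: human_respondent, labels[1]: machine_respondent}

    transcript = []
    for question in questions:
        answers = {label: respond(question) for label, respond in hidden.items()}
        transcript.append((question, answers))
    return transcript  # the judge inspects only this anonymized transcript

# Hypothetical stand-ins, for illustration only.
human = lambda q: "I'd have to sleep on that."
machine = lambda q: "I'd have to sleep on that."
print(imitation_game(["What do dreams feel like?"], human, machine))
```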

Philosopher John Searle offered a famous criticism of the Turing test based on a thought experiment: Suppose you are in a room with a very large code book that enables you to decipher questions posed to you in Chinese, and then to encode your answers in Chinese. You can now carry on a conversation in writing with a Chinese speaker. However, Searle argued, you would be doing so without knowing how to speak Chinese (assuming you don’t in fact know Chinese). Likewise, Searle contended, a computer that simply implements a program—a very complex code book—does not “know” what it is doing. Thus, even if the programmed computer can fool interlocutors into believing that the computer is “someone” rather than something, there is no one home. Just as there is no Chinese speaker in the “Chinese room,” so there is no consciousness—no internal subjective sense of self—in the computer.
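
Searle’s “code book” is easy to caricature in a few lines of code. The sketch below (with invented dictionary entries) answers Chinese questions by pure lookup, which is precisely his point: producing a fluent reply requires no understanding at all.

```python
# A tiny "code book" in the spirit of Searle's Chinese Room: the program
# matches incoming Chinese questions to canned Chinese replies without
# understanding a word of either. (Entries are invented for illustration.)
CODE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thank you."
    "今天天气怎么样？": "天气很好。",    # "How's the weather today?" -> "The weather is nice."
}

def chinese_room(question: str) -> str:
    # Pure lookup: rule-following with no meaning and no awareness.
    return CODE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # prints a fluent reply the "room" does not understand
```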

Searle’s position is itself controversial. Some theorists of AI think that we do not yet know enough about how consciousness arises in organic brains to say whether it is possible for it to arise in a computer.

I do not want to take a position on whether Searle’s view about the limits of computers is correct, but I do want to note something very important on which he and his interlocutors agree: there is a difference between something that is very good at answering questions but does not have any subjective states and something that does have subjective states. The latter sort of something is not just a something; it is someone.

In a sense, we have had machines capable of artificial intelligence for decades. A 1980s pocket calculator could instantly determine, say, the square root of seven to ten places, while at most a handful of human savants could do the same. But what that calculator could not do, and what current machines still cannot do, is have subjective experiences. Perhaps Searle is right and computers as we currently imagine them will never become aware. But if they do, we will have something beyond artificial intelligence. We will have artificial sentience.
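
To illustrate, a few lines of Python reproduce the calculator’s feat, reporting the square root of seven to ten decimal places instantly and, of course, without anything like experience:

```python
from decimal import Decimal, getcontext

getcontext().prec = 20          # ample working precision
root = Decimal(7).sqrt()        # instant, mechanical computation
print(round(root, 10))          # 2.6457513111 -- ten places, no experience involved
```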

R2-D2 and BB-8 are charming despite the fact that, speaking only in beeps, they are unintelligible to us humans. Indeed, they are charming partly because they are unintelligible. They are at once alien and recognizable, and what we recognize in them is less human than canine. In Star Wars: The Force Awakens (very minor spoiler alert), BB-8 even becomes Rey’s droid through dog-like behavior: After being told to go away, BB-8 nonetheless follows her.

In Battlestar Galactica, the humans rationalize cruelty to the Cylons on the ground that the latter are only “toasters”—that is, very effective simulacra of sentient beings but really possessed of no more internal subjectivity than a toaster or other inanimate object. If that were true—if the Cylons really lacked subjectivity—then torturing and killing them would indeed have no moral significance. But precisely because the Cylons act in ways that give overwhelming evidence of sentience, “toaster” is understood as a slur.

Is true artificial sentience possible? That is a scientific and technological question to which we do not yet have the answer. But we already have abundant evidence of the reality of non-human sentience among our fellow Earthlings. If and when we create artificial sentience, we should accord the resulting mechanical beings respect. In the meantime, I would suggest that anxieties about the status of those hypothetical beings reflect projected guilt over our hideous disregard for the interests of the billions of sentient animals we exploit, torture, and kill in the here and now.