Earlier this month, a playful story in the Washington Post explored the question of whether an artificial intelligence (AI) might someday run for President of the United States. Fans of dystopian science fiction like The Matrix and Terminator films or the television series Battlestar Galactica (the terrific mid-2000s reboot, not the horrid original) might have thought that they were alone in wondering about such matters, but apparently deep thinkers take seriously the risk that our future machines might rise up and kill us all. A recent New Yorker article noted that Oxford philosopher Nick Bostrom worries that AI “might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction.”
That truly does sound serious and seems to vindicate the title of the Washington Post story: You’re worried about Trump? In 100 years, robots might be running for president.
Yet alongside murderous machines like HAL in 2001: A Space Odyssey, one can find kindly robots who exhibit our best qualities, sometimes better than we humans do ourselves: Data in Star Trek: The Next Generation; C-3PO, R2-D2, and now BB-8 in the Star Wars films; Wall-E; and the OS Samantha (voiced by Scarlett Johansson) in Her.
Which vision is more accurate? Will Siri and Google’s prototype self-driving cars pave the way for a future like The Jetsons, in which robots benignly serve us, or for one more like The Matrix, in which they farm us to meet their energy needs?
That is probably the wrong question to ask. Appealing science fiction has almost always said more about the world from which it emerged than about the world it imagined.
For example, the original Star Trek television series was born just before the Vietnam War escalated and went badly for the United States. Its outlook was generally sunny: although the Enterprise crew was nominally multinational (and, counting Spock, multi-planetary), the show’s sensibility was decidedly that of the can-do American spirit, and its battles were thinly disguised metaphors for the Cold War (occasionally not even bothering with the disguise). By contrast, darker times bring darker sci-fi. For example, an early episode of Battlestar Galactica featuring the torture of a Cylon clearly evoked Abu Ghraib.
What does the renewed serious and pop-culture focus on the future of AI tell us about the present? Partly, it might simply reflect the fact that while true AI remains a long way off, capacities like voice recognition and artificial speech have improved to the point that one can sometimes forget that our machines are simply executing programs. It is now easier than ever before to imagine a future AI.
Yet exploring the nature of the AI debate reveals something unexpected. Its terms have important implications for how we think about beings who exist in the here and now: non-human animals.
Express Links Between Robot Rights and Animal Rights
The connection between AI and animal rights may be surprising, but it is real and longstanding. Consider that in writing the Washington Post story on robots running for president, journalist Philip Bump sought comments from three lawyers: Pace Law Professor David Cassuto; Nonhuman Rights Project president Steven Wise; and yours truly. In addition to a willingness to humor journalists (and perhaps a taste for the spotlight), Cassuto, Wise, and I share an important characteristic: we are all vegans who believe that humans act terribly unjustly in our relations with nonhuman animals.
As each of us explained to Bump, whether someone should be treated with dignity—be treated as someone rather than something—should not depend on whether that someone happens to be a member of the species Homo sapiens. It is not surprising that in looking for people to defend the proposition that rights could extend to mechanical beings, Bump found three people who believe that rights should extend to non-human animal beings.
The connection between rights for robots and rights for animals has not been lost on science fiction writers. For example, the 1968 Philip K. Dick novel, Do Androids Dream of Electric Sheep? (on which the film Blade Runner was based) poses the question of what it means to be human in a conflict between humans and androids playing out on a ruined planet Earth. The humans in charge believe that the key is the ability to feel empathy—an ability that the book’s androids lack. Strikingly, a test to determine whether a subject is an android looks at the subject’s reaction to being told that she has received a calfskin wallet as a birthday present. The right reaction, the one felt by empathic humans but not by the unfeeling androids, is one of horror and disgust at the prospect of the killing and exploitation of a baby cow.
Express animal rights themes can be found in other popular science fiction as well. E.T. tells the story of an extraterrestrial rather than of a robot, but aliens and robots occupy roughly the same fraught territory in science fiction. Notably, under the mysterious but benign telepathic influence of the angelic E.T., Elliott and his fellow students release the frogs destined to be killed in his science class. The ability to empathize with one set of others—whether aliens or sentient robots—is connected to the ability to empathize with others more generally, whether they are people of a different race or religion, or sentient animals of a different species.
From Artificial Intelligence to Artificial Sentience
The connection between AI and animal rights is not mere sentimentality. It runs right through the heart of the debate over AI itself.
Even the most advanced current technology falls short of artificial intelligence. How will we know when we have achieved it? One widely discussed (and also widely criticized) measure is the “Turing test,” named for the pioneering British computer scientist Alan Turing. He proposed that a machine should count as intelligent if it can answer questions from a human interrogator well enough that the interrogator cannot tell whether the answers come from a computer or from another human.
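Turing’s proposal can be sketched as a simple test harness. Everything concrete below—the judge, the two respondents, and the question—is a hypothetical stand-in of my own, not anything Turing specified; the sketch only shows the shape of the game.

```python
import random

def imitation_game(judge, machine, human, questions):
    """Turing's imitation game, schematically: a judge interrogates two
    unseen respondents and must say which one is the machine. If the
    judge can do no better than chance, the machine counts as passing."""
    # Randomly assign the two respondents to slots "a" and "b".
    a, b = (machine, human) if random.random() < 0.5 else (human, machine)
    answers_a = [a(q) for q in questions]
    answers_b = [b(q) for q in questions]
    guess = judge(questions, answers_a, answers_b)  # judge returns "a" or "b"
    return guess == ("a" if a is machine else "b")  # True = judge caught the machine

# Toy respondents: the "machine" answers stiffly, the "human" casually.
machine = lambda q: "AFFIRMATIVE."
human = lambda q: "sure, I guess?"

# A judge who flags the stilted respondent as the machine.
judge = lambda qs, ans_a, ans_b: "a" if ans_a[0] == "AFFIRMATIVE." else "b"

print(imitation_game(judge, machine, human, ["Do you dream?"]))  # True: this machine is caught every time
```

A machine that passed would be one no such judge could reliably catch, no matter how probing the questions.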
Philosopher John Searle offered a famous criticism of the Turing test based on a thought experiment: Suppose you are in a room with a very large code book that enables you to decipher questions posed to you in Chinese, and then to encode your answers in Chinese. You can now carry on a written conversation with a Chinese speaker. However, Searle argued, you would be doing so without knowing Chinese (assuming you do not in fact know the language). Likewise, Searle contended, a computer that simply implements a program—a very complex code book—does not “know” what it is doing. Thus, even if the programmed computer can fool interlocutors into believing that the computer is “someone” rather than something, there is no one home. Just as there is no Chinese speaker in the “Chinese room,” so there is no consciousness—no internal subjective sense of self—in the computer.
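Searle’s “code book” can be caricatured in a few lines: a lookup table that emits fluent replies while nothing in the system understands a word. The phrases below are illustrative placeholders of my own, not Searle’s examples.

```python
# A toy "Chinese room": the whole program is a code book that matches
# input symbols to output symbols. Fluent answers come out, but no
# component of the system understands Chinese.
CODE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然，说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def chinese_room(question: str) -> str:
    """Look the question up; stall politely if it isn't in the book."""
    return CODE_BOOK.get(question, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # 我很好，谢谢。
```

Searle’s point is that scaling this table up to any size changes nothing about where understanding resides, which is to say, nowhere.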
Searle’s position is itself controversial. Some theorists of AI think that we do not yet know enough about how consciousness arises in organic brains to say whether it is possible for it to arise in a computer.
I do not want to take a position on whether Searle’s view about the limits of computers is correct, but I do want to note something very important on which he and his critics agree: there is a difference between something that is very good at answering questions but has no subjective states and something that does have subjective states. The latter is not just a something; it is someone.
In a sense, we have had machines capable of artificial intelligence for decades. A 1980s pocket calculator could instantly determine, say, the square root of seven to ten places, while at most a handful of human savants could do the same. But what that calculator could not do, and what current machines still cannot do, is have subjective experiences. Perhaps Searle is right and computers as we currently imagine them will never become aware. But if they do, we will have something beyond artificial intelligence. We will have artificial sentience.
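The calculator half of that comparison is trivially reproducible today; the sketch below is only an illustration of the gap between computing a value and experiencing anything.

```python
import math

# What a 1980s pocket calculator could do instantly: the square root
# of seven to ten decimal places. Computation, but no experience.
root = math.sqrt(7)
print(f"{root:.10f}")  # 2.6457513111
```

The machine that prints this number has no more awareness of the answer than the paper it might be printed on.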
R2-D2 and BB-8 are charming despite the fact that, speaking only in beeps, they are unintelligible to us humans. Indeed, they are charming partly because they are unintelligible. They are at once alien and recognizable, and what we recognize in them is less human than canine. In Star Wars: The Force Awakens (very minor spoiler alert), BB-8 even becomes Rey’s droid through dog-like behavior: After being told to go away, BB-8 nonetheless follows her.
In Battlestar Galactica, the humans rationalize cruelty to the Cylons on the ground that the latter are only “toasters”—that is, very effective simulacra of sentient beings but really possessed of no more internal subjectivity than a toaster or other inanimate object. If that were true—if the Cylons really lacked subjectivity—then torturing and killing them would indeed have no moral significance. But precisely because the Cylons act in ways that give overwhelming evidence of sentience, “toaster” is understood as a slur.
Is true artificial sentience possible? That is a scientific and technological question to which we do not yet have the answer. But we already have abundant evidence of the reality of non-human sentience among our fellow Earthlings. If and when we create artificial sentience, we should accord the resulting mechanical beings respect. In the meantime, I would suggest that anxieties about the status of those hypothetical beings reflect projected guilt over our hideous disregard for the interests of the billions of sentient animals we exploit, torture, and kill in the here and now.
What a fascinating article – focusing on determining whether machines can ever approach human [or animal] status as feeling beings! But one factor explored in the article as a basis for determining a machine’s proximity to human status – the capacity for empathy – is not universal in humans. Some humans may be “sentient” – i.e., have some awareness of themselves [as opposed to machines] – but are incapable of feeling for the pain [or even pleasure] of others. Sociopathic individuals comprise a segment of such defective humans. Wouldn’t it be great if science could develop empathetic machines, and if the details of such development could be used to enable the “reprogramming” of humans who lack this quality? We could virtually empty out our prisons if this were possible!
“You’re worried about Trump?” Seems like you couldn’t resist – does Trump keep you up at night? You are classed as one of the Thinking Lawyers. There’s something more important than Trump to think about: sound the alarm about H RES 569, a clear and present threat to freedom of speech being cooked up by Obama’s robots – they’re already here!
“Is true artificial sentience possible? That is a scientific and technological question to which we do not yet have the answer.” That is eminently arguable, at least for some of us who would contend that this is rather in the first instance a philosophical (especially metaphysical and philosophy of mind) question. The following works, I think, help one see why this is (or should be) the case:
• Bennett, M.R. and P.M.S. Hacker. Philosophical Foundations of Neuroscience. Malden, MA: Blackwell, 2003.
• Bennett, Maxwell, Daniel Dennett, Peter Hacker, John Searle, and Daniel Robinson. Neuroscience and Philosophy: Brain, Mind and Language. New York: Columbia University Press, 2007. (I prefer the arguments of Bennett, Hacker, and Robinson over Dennett and Searle.)
• Descombes, Vincent (Stephen Adam Schwartz, tr.). The Mind’s Provisions: A Critique of Cognitivism. Princeton, NJ: Princeton University Press, 2001.
• Finkelstein, David H. Expression and the Inner. Cambridge, MA: Harvard University Press, 2003.
• Gillett, Grant. Subjectivity and Being Somebody: Human Identity and Neuroethics. Exeter, UK: Imprint Academic, 2008.
• Gillett, Grant. The Mind and Its Discontents. New York: Oxford University Press, 2nd ed., 2009.
• Hacker, P.M.S. Human Nature: The Categorial Framework. Malden, MA: Blackwell, 2007.
• Hacker, P.M.S. The Intellectual Powers: A Study of Human Nature. Malden, MA: Wiley-Blackwell, 2013.
• Hodgson, David. The Mind Matters: Consciousness and Choice in a Quantum World. New York: Oxford University Press, 1991.
• Horst, Steven. Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. Oxford, UK: Oxford University Press, 2007.
• Hutto, Daniel D. The Presence of Mind. Amsterdam: John Benjamins, 1999.
• Hutto, Daniel D. Beyond Physicalism. Amsterdam: John Benjamins, 2000.
• Hutto, Daniel D. Folk Psychological Narratives: The Sociocultural Basis of Understanding. Cambridge, MA: MIT Press, 2008.
• Pardo, Michael S. and Dennis Patterson. Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience. New York: Oxford University Press, 2013.
• Robinson, Daniel N. Consciousness and Mental Life. New York: Columbia University Press, 2008.
• Smith, Christian. What Is a Person? Chicago, IL: University of Chicago Press, 2010.
• Tallis, Raymond. The Explicit Animal: A Defence of Human Consciousness. New York: St. Martin’s Press, 1999 ed.
• Tallis, Raymond. The Hand: A Philosophical Inquiry into Human Being. Edinburgh: Edinburgh University Press, 2003.
• Tallis, Raymond. I Am: An Inquiry into First-Person Being. Edinburgh: Edinburgh University Press, 2004.
• Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth. Edinburgh: Edinburgh University Press, 2004.
• Tallis, Raymond. Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. Durham, England: Acumen, 2011.
• Travis, Charles. Unshadowed Thought: Representation in Thought and Language. Cambridge, MA: Harvard University Press, 2000.
“the interests of the billions of sentient animals we exploit, torture, and kill in the here and now.”
Some animals are food. To make a blanket argument that all of the animals raised to be food are being exploited is ludicrous. Yes, there are undoubtedly some instances of poor conditions, and the way some of them are killed is probably less than humane, but to claim that it is all “exploitation and torture” is equally wrong. The food chain is an actual thing, and animals are food that we are superbly adapted to take advantage of.
A better point, in my opinion, if we were to carry this argument to another link in the chain, is to argue that we have far too many people on this planet, and far too many people who are treated poorly and unjustly. If we are to think first of the animals before the AI, then shouldn’t we also think first of the people on the planet already before we consider the animals that are their food? Before I worry terribly about the plight of an animal raised to be sustenance, I am much more concerned with the condition of neglected children and people who are living in squalor and can’t afford to take care of themselves. Let us carry the argument ad absurdum, shall we?
Let us institute a “birth cap” in the name of animal rights. Each person is allotted half a child. They may find another with their allotment also intact, and raise one child. Then after the birth, they are both sterilized. This cuts down on the resource drain affecting the planet, and in turn allows for less unemployment, fewer social welfare programs, and better management of public dollars, since there are fewer people in need of support. Fewer people means we also need to raise fewer animals, which means much less exploitation and torture for these animals. Fewer people would mean less pollution, and we could cut back on global warming as well.
In reality, I am not offended by the article; I just don’t agree with the stretch of applying AI rights to the “rights” of animals raised for food. But the worst is your statement, which was much too broad, implying that all of these animals are tortured. And for the record, I am not a vegetarian, but I do raise all of the animals that I eat. Not everyone can do this, I know, and it is a luxury that I can afford, but there has to be a way to supply the population with meat and food that is at least affordable. There is obviously no easy answer, but if we are to enter into a frank discussion, it should be recognized that doing away with animals as food for a very large population is never going to happen. So blanket statements such as the one you ended with (implying, in my opinion, that meat as food should go away) will never allow a real discussion to truly start.
There is a direct connection happening in an animal rights case right now. The Copyright Office won’t register items created by non-humans. PETA is representing the monkey who took his own pic against the humans!: http://www.peta.org/blog/monkey-selfie-case-animal-rights-focus/