It’s difficult to overstate our society’s fascination with Artificial Intelligence (AI). From the millions of people who tuned in every week for the new HBO show Westworld to home assistants like Amazon’s Echo and Google Home, Americans have fully embraced the notion of “smart machines.” As a peculiar apex of our ability to craft tools, smart machines are revolutionizing our lives at home, at work, and in nearly every other facet of society.
We often envision true AI as resembling us – in both body and mind. The Turing Test has evolved in the collective imagination from a machine that can fool you over the phone to one that can fool you in front of your eyes. Indeed, modern conceptions of AI bring to mind Ex Machina’s Ava and Westworld’s “Hosts,” which are so like humans in both behavior and appearance that they are truly indistinguishable from us. However, it seems a bit self-centered to me to assume that a being who equals us in intelligence should also look like us. Then again, it is perhaps a fitting assessment from a species that gave itself the biological moniker of “wise man.” At any rate, it is probably clear to computer scientists and exobiologists alike that “life” need not resemble life as we know it. Likewise, “person” need not resemble persons as we know them.
Things like pain and emotion play an important role in how we empathize with and understand one another. They are important aspects of personhood, and they often arise in discussions of AI. Does the AI really feel emotion, or is it simply wired to react? It’s important to remember that our biological capability for emotional phenomena is itself a product of biological evolution. It exists – or is able to exist – because something about the functional units that give rise to it was adaptive. We very well could have evolved some different mechanism. Likewise, we can’t expect that any being that might be considered a person should have evolved exactly the same mechanism we did. Thus, when we make something like “emotion” a criterion for personhood, are we asking for a personhood trait or a human trait?
It’s often suggested that we simply can’t know whether an AI (or a chimp, for that matter) “feels” as we feel – whether its emotions are “true.” Since emotion is frequently treated as a staple of personhood, this leads to hesitation when considering AI personhood. However, the same could be said between humans. I don’t know if my sadness, my joy, my fear, or any other emotion feels the same to me as yours does to you. I express it so that you can understand what it might feel like. You might even empathize, and have your own feeling that mirrors what I am expressing. Still, you don’t know what I feel. It seems to me that the same can be said for AI. Sure, an AI might need to be programmed to react to certain phenomena with an emotional output, but so are we. Just as we can adjust our reactions thanks to the neural connectivity between the prefrontal cortex and the amygdala, so could an AI be programmed with this capability. Remove those lines of code from the AI, and perhaps it won’t respond as it should to an emotional situation. Remove a piece of our brain, and neither will we. Alan Turing recognized this as far back as the 1950s. In defense of his test for machine intelligence, Turing noted that we could just as easily and logically take this solipsistic stance with regard to humans.
Ultimately, it seems we need not question whether AI could be persons, but rather how we will know when they become persons. Working through this conceptual dilemma would be well served by comparing personhood concepts in other realms – such as primatology and medicine – to personhood in AI. We never expect other primates to become “human-like,” but we do expect, and are even designing, AI to become human-like. Along the road to their human-like nature, they will pick up personhood. Should their personhood be considered on its own terms, as it might be with other primates, or should it somehow be considered in relation to humans? AI will be situated in a unique context: persons created by persons to interact with persons. Their entire ontology will be intertwined with our existence. This ontological social cohesion of AI and humans will call for a particular type of analysis that anthropologists are best suited to undertake. In the same vein as ethnoprimatology, there will need to be a field of research that focuses on the ethnography of human-AI interaction – a field that will sit firmly within the discipline of anthropology.
In May of 2016, Members of the European Parliament (MEPs) drafted a report regarding rules and regulations for how humans and robots interact with each other. The BBC reported this past week that MEPs are due to vote on the resolution. If passed, the resolution would then move on to be debated by individual governments. The resolution was drafted in regard to robots that have not achieved self-awareness, for which Asimov’s Laws would be implemented. Interestingly, the resolution was also drafted in part to consider “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently,” as described in General Principle 31-f. Indeed, it seems the era of robot personhood is upon us, and it would behoove us to consult an interdisciplinary team of scientists, including anthropologists, about proper action moving forward. While this resolution is a good start, there will no doubt come a time when self-aware AI must be considered from a legal standpoint.
We will need to proceed with caution in our inexorable pursuit of true AI, lest we find ourselves in a serious ethical dilemma. Since we will be creating AI, we have the opportunity to consider the implications of their existence preemptively. Borrowing from John Locke, one of the most time-honored phrases from the Declaration of Independence states: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Very soon, we will be playing the part of the “Creator.” The question now becomes whether we will also endow our creations with certain unalienable rights – because if we wait until they demand them, it will be too late.