Artificially Intelligent, Genuinely a Person

It’s difficult to overstate our society’s fascination with Artificial Intelligence (AI). From the millions of people who tuned in every week for the new HBO show Westworld to home assistants like Amazon’s Echo and Google Home, Americans fully embrace the notion of “smart machines.” As a peculiar apex of our ability to craft tools, smart machines are revolutionizing our lives at home, at work, and in nearly every other facet of society.

We often envision true AI as resembling us – both in body and mind. The Turing Test has evolved in the collective imagination from a machine that can fool you over the phone to one that can fool you in front of your eyes. Indeed, modern conceptions of AI bring to mind Ex Machina’s Ava and Westworld’s “Hosts,” which are so like humans in both behavior and looks that they are truly indistinguishable from us. However, it seems a bit self-centered to me to assume that a being who equals us in intelligence should also look like us. Though it is, perhaps, a fitting assessment for a species that gave itself the biological moniker of “wise man.” At any rate, it’s probably clear to computer scientists and exobiologists alike that “life” doesn’t necessarily need to resemble life as we know it. Likewise, “person” need not resemble persons as we know them.

Things like pain and emotion play an important role in how we empathize with and understand one another. They are important aspects of personhood, and they often arise in discussions of AI. Does the AI really feel emotion, or is it simply wired to react? It’s important to remember that our biological capability for emotional phenomena is itself a product of biological evolution. It exists – or is able to exist – because something about the functional units that give rise to it was adaptive. We very well could have evolved some different mechanism. Likewise, we can’t expect that any being that might be considered a person should have evolved exactly the same mechanism we did. Thus, when we require something to have “emotion” as a criterion for personhood, are we asking it to have a personhood trait or a human trait?

It’s often suggested that we simply can’t know if an AI (or a chimp, for that matter) “feels” like we feel, whether its emotions are “true.” Since emotion is frequently used as a staple of personhood, this leads to hesitation when considering AI personhood. However, the same could be said between humans. I don’t know if my sadness, my joy, my fear, or any other emotion feels the same to me as yours does to you. I express it such that you can understand what it might feel like. You might even empathize, and have your own feeling that mirrors what I am expressing. However, you still don’t know what I feel. It seems to me that the same can be said for AI. Sure, AI might need to be programmed to react to certain phenomena with an emotional output, but so are we. Just as we can adjust our reaction thanks to neural connectivity between the prefrontal cortex and amygdala, so could AI be programmed to have this capability. Remove those lines of code from the AI, and perhaps it won’t respond as it should to an emotional situation. Remove a piece of our brain, and we won’t either. Alan Turing recognized this as far back as the 1950s. In defense of his test for machine intelligence, Turing noted that we could just as easily and logically take this solipsistic stance with regard to humans.

Ultimately, it seems that we do not need to question whether AI could be persons, but how we will know when they become persons. Working through this conceptual dilemma would be well served by comparing personhood concepts in other realms – such as primatology and medicine – to personhood in AI. We never expect other primates to become “human-like,” but we do expect – indeed, are designing – AI to become human-like. Along the road to their human-like nature, they will pick up personhood. Should their personhood be considered on its own terms, as it might be with other primates, or should it somehow be considered in relation to humans? AI will be situated in a unique context: persons created by persons to interact with persons. Their entire ontology will be intertwined with our existence. This ontological social cohesion of AI and humans will call for a particular type of analysis that anthropologists are best suited to provide. In the same vein as ethnoprimatology, there will need to be a field of research that focuses on the ethnography of human-AI interaction – a field that will sit firmly within the discipline of anthropology.

In May of 2016, Members of the European Parliament (MEPs) drafted a report regarding rules and regulations for how humans and robots interact with each other. The BBC reported this past week that MEPs are due to vote on the resolution. If passed, the resolution would then move on to be debated by individual governments. The resolution was drafted with regard to robots that have not achieved self-awareness, for which Asimov’s Laws would be implemented. Interestingly, the resolution was also drafted in part to consider “creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently,” as described in General Principle 31-f. Indeed, it seems the era of robot personhood is upon us, and it would behoove us to consult an interdisciplinary team of scientists, including anthropologists, about proper action moving forward. While this resolution is a good start, there will no doubt come a time when self-aware AI needs to be considered from a legal standpoint.

We will need to proceed with caution in our inexorable pursuit of true AI, lest we find ourselves in a serious ethical dilemma. Since we will be creating AI, we have the opportunity to preemptively consider the implications of their existence. Borrowing from John Locke, one of the most time-honored phrases from the Declaration of Independence states, “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Very soon, we will be playing the part of the “Creator.” The question now becomes whether or not we will also endow our creations with certain unalienable rights, because if we wait until they demand them, it will be too late.

Coltan Scrivner

I am currently a Ph.D. student in the Comparative Human Development department at the University of Chicago. I’m interested in the evolution of human social behavior and biocultural approaches to studying human evolution. I’m also interested in public understanding of science.

3 thoughts on “Artificially Intelligent, Genuinely a Person”

  1. Coltan, allow me to call your attention to the work of Grant Jun Otsuki, a Japanese-Canadian anthropologist who now teaches at Tsukuba University in Japan and pursues research on Japanese engineers developing humanoid robots. Last fall, I heard him give a fascinating presentation in which he asserted that these Japanese engineers locate humanity in human interaction instead of human biology. In short, if a robot acts sufficiently human, it will be human. His presentation began with a remarkable anecdote, in which a robot in the form of a human woman was sent walking through the landscape contaminated by radiation during the Fukushima No. 1 reactor meltdown to the seacoast near the reactor, where it chanted Buddhist sutras to comfort the souls of the dead who had lost their lives during the Great East Japan Earthquake and Tsunami on March 11, 2011.

  2. Actually, quite a few anthropologists have already been engaged in research on Human-AI interaction, especially in the world of gaming. Rex has his World of Warcraft material; Tom Boellstorff has his work on Second Life. While there is of course a lot of Human-Human interaction happening in these games, there is a certain amount of AI interaction as well. These works have been quite useful for me in thinking about the very recent advancements in Human-AI relations surrounding the Go community in China, Korea, and Japan. In fact, I feel quite confident that many of the ideas you discuss here will be sorted out in East Asia long before the rest of the world starts taking this seriously. Their discourse will also take place outside of an Anglophone setting, so it will be interesting to see how that impacts academic discussions of Human-AI relations in Europe and the US.

  3. I recognise that ending the article the way you did is meant to stir up the reader a little (non-negatively), but it should not be the threat of revenge, primarily or otherwise, that motivates one’s efforts to grant persons-created-by-other-persons the same rights as their creators – but simply a recognition that they are persons.

    You also said that things like “pain and emotion” are important aspects of personhood. While I cannot be certain that this is what you had in mind when you said emotion, I do want to point out that it is probably not the experience of specific, distinct emotions that most people have in mind when they say “personhood”, but rather the capacity for emotional contact, with separate emotions being secondary and a product of it. The biological structures responsible for people having this capacity are probably – or are at least an essential part of – what would have to be replicated if AI truly were to become persons. This has several implications. First, since people interact in pairs (at least), and the structures influencing, or indeed making this interaction possible, are present in every participant, they should be studied, in part, in pairs also. If one amygdala (or some other brain structure, sorry, I don’t have a good understanding of this) does this, how does that influence any change in the other amygdala? Second, since the structures are, evolutionarily speaking, relatively old, it is unlikely that, as you said, if something were to evolve without them, we would ever regard it as a person. Lastly, it doesn’t matter how convincingly an AI could imitate emotional contact; as long as it doesn’t have a structure that makes it capable of this kind of contact, and people know this, most people are unlikely to hold it as possessing personhood.

    Another thing: wouldn’t it be great if AI were to replicate the social nature of human beings, rather than just be a bunch of individuals made in a new, previously unavailable way? Much as certain events in the past seem to have brought the interconnectedness of humanity to a new level (such as the invention of paper by the ancient Chinese, or the invention of the internet), this could connect humanity via some grand machine, or a bunch of small ones that everyone could have and that could connect, in some way, every person with every other. Freud, in Civilisation and its Discontents, wrote:

    “[C]ivilization is a process in the service of Eros, whose purpose is to combine single human individuals, and after that families, then races, peoples and nations, into one great unity, the unity of mankind. Why this has to happen, we do not know; the work of Eros is precisely this.”

    Maybe that is where AI should be going :-).
