I’ve been thinking a lot lately about community: who we are as a community, what keeps us connected and together, and how community knowledge is stored and distributed. As an anthropologist, my research focuses in part on automation and the impact of algorithms on society, in particular on our relationships and how we maintain them toward common cooperative goals. As such, when technology begins to change our relationship to our local surroundings (as it has been doing increasingly over time with each new capability), I pay attention to how this changes our physical and social structures, and our relationships to them and to each other.
Recently, Apple Inc. has branded the privatization of the idea of the commons by renaming its retail Apple stores “Town Squares.” In Apple’s definition, these “Town Squares” are places where people will gather, talk, share ideas, and watch movies, all within Apple’s carefully curated, minimalist chrome-and-glass boxes. In this scenario, Apple’s “Town Square” is tidy, spartan, and, most critically, privatized. This isn’t new behavior; what is new is the context within which Apple is able to do this, both from inside shopping malls and from retail locations on Main Streets. Applin (2016) observed that private companies are collecting and replicating community through their networks and communications records. Madrigal (2017) observes that “the company has made the perfect physical metaphor for the problem the internet poses to democracy.” This article discusses what happens, and what we forfeit, in these hybrid gathering places between Internet usage and privately owned spaces, and how these hybrid spaces became possible in the first place.
In early September, Apple Inc. launched its new iPhone and, with it, Face ID, software that uses facial recognition to authenticate the user and unlock the iPhone. The mass global deployment of facial recognition in society is an issue worthy of public debate. Apple, as a private company, has now chosen to deploy facial-recognition technology to millions of users worldwide without any public debate, ethics oversight, regulation, public input, or discourse. Facial-recognition technology can be flawed and peculiarly biased, and the worldwide deployment of Face ID sets an alarming precedent for what private technology companies are at liberty to do within society.
One of the disturbing issues with the press coverage of Face ID during the week of Apple’s announcement was the limited criticism of what it means for Apple to deploy Face ID, and for those who follow Apple to deploy their own versions. What does it mean to digitize our faces and use a facsimile of our main human identifier (aside from our voices) as a proxy for our human selves, and to pay Apple nearly US$1,000 to do so?
Savage Minds welcomes guest blogger Sally Applin
Hello! I’m Sally Applin. I am a technology anthropologist who examines automation, algorithms, and Artificial Intelligence (AI) in the context of preserving human agency. My dissertation focused on small, independent, fringe technology makers in Silicon Valley: what they are making and, most critically, how the adoption of the outcomes of their efforts impacts society and culture locally and/or globally. I’m currently spending the summer in a corporate AI research group, where I contribute to anthropological research on AI. I’m thrilled to blog for the renowned Savage Minds this month and hope many of you find value in my contributions.
There is so much going on in the world that it is challenging to choose a single topic to write about (floods, fires, hurricanes, politics); as anthropologists in 2017, we are spoiled for choice. However, as a warm-up for the month ahead, I thought I’d start with a short piece on automation and agency to frame future pieces that will address these topics. The following is a letter I wrote yesterday morning to the House of Lords in the UK, which issued a call for participation on the governance and regulation of Artificial Intelligence, a topic of great importance to me. If done well, AI will benefit many; if overlooked, or done in haste or without forethought, poorly designed algorithms and automation could produce catastrophic outcomes and limitations that permanently alter society as we know it.
The English word “person” has a long and convoluted history. Though the word itself likely derives from the Latin persona, referring to the masks worn in theatre, its meaning has evolved over time. One of the biggest conceptual overhauls came in the 4th century AD, during a church council held to investigate the concept of person as it related to the Trinity. Whereas the Greek fathers defined the Trinity as three hypostases, roughly translated as “substances” or “essences,” the Latin fathers saw them as one hypostasis that could be distinguished by the concept of persona. Because the Roman Church and the Greek Church each viewed the other as orthodox, they brushed off the difference of terms as semantics. Over time, this resulted in a conceptual conflation of the terms, effectively leading to persona encapsulating both the “role” one plays and one’s “essence” or “character.”
It’s difficult to overstate our society’s fascination with Artificial Intelligence (AI). From the millions of people who tuned in every week to the new HBO show Westworld to home assistants like Amazon’s Echo and Google Home, Americans fully embrace the notion of “smart machines.” As a peculiar apex of our ability to craft tools, smart machines are revolutionizing our lives at home, at work, and in nearly every other facet of society.
We often envision true AI as resembling us, both in body and mind. The Turing Test has evolved in the collective imagination from a machine that can fool you over the phone to one that can fool you in front of your eyes. Indeed, modern conceptions of AI bring to mind Ex Machina’s Ava and Westworld’s “Hosts,” which are so like humans in both behavior and appearance that they are truly indistinguishable from other humans. However, it seems a bit self-centered to me to assume that a being who equals us in intelligence should also look like us. Though it is perhaps a fitting assessment for a species that gave itself the biological moniker of “wise man.” At any rate, it’s probably clear to computer scientists and exobiologists alike that “life” doesn’t necessarily need to resemble what we know it as. Likewise, “person” need not resemble what we know it as.