Paying with Our Faces: Apple’s FaceID

In early September, Apple Inc. launched its new iPhone and with it FaceID, software that uses facial recognition to authenticate users and unlock the iPhone. The mass global deployment of facial recognition in society is an issue worthy of public debate, yet Apple, a private company, has chosen to deploy the technology to millions of users worldwide without any ethics oversight, regulation, public input, or discourse. Facial-recognition technology can be flawed and particularly biased, and the worldwide deployment of FaceID sets an alarming precedent for what private technology companies are at liberty to do within society.

One of the disturbing aspects of the press coverage of FaceID during the week of Apple’s announcement was how little criticism there was of what it means for Apple to deploy FaceID, and for the companies that will follow Apple with their own versions. What does it mean to digitize our faces and use a facsimile of our primary human identifier (aside from our voices) as a proxy for our human selves, and to pay Apple nearly $1,000 U.S. for the privilege?

FaceID could be dismissed as a gimmick: Apple has developed the technology and can now offer this type of “science fiction” experience on its phones, giving customers a new way to authenticate their identity. But it isn’t that simple. All new technologies, like any other new human production, become embedded in society in various ways, are used in unforeseen contexts, and have unforeseen consequences. Even if Apple is only deploying this technology within the context of its iPhone, it is setting a usage model, and doing so privately, around the regulation that governs society. This move by Apple, made so casually on such a broad scale, may change how we live and how our faces are used forevermore.

Facial recognition falls into the category of technology called “biometrics.” Biometrics are measurements of the body used to identify individuals; they include digital fingerprint recognition, retinal scans, voice recognition, heat maps, and facial recognition, among others. Apple has been using digital fingerprint recognition for some time with Touch ID. The issues with facial recognition, however, are more complex.

There are several issues with facial-recognition software that have been raised over time, with algorithmic bias being one of the main ones [1]. Simply put, algorithmic bias exists when an algorithm, often because of the data it was built and trained on, produces systematically worse results for some people than for others. In the case of facial recognition, people have different facial features and skin tones, and for some humans, particularly those with darker skin tones, facial-recognition software either cannot recognize them at all or, worse, recognizes a face but attributes it to the wrong person, identifying them as someone other than who they are. This might be merely annoying when the algorithm won’t unlock someone’s iPhone, but it can cause severe problems when facial-recognition technology is deployed on a massive scale across various facets of our society. In the future, facial-recognition technology may determine access to the commons, and as such could easily attribute circumstances and surveillance video “evidence” to the wrong person, resulting in false accusations at best, and action on those false accusations (if law enforcement responses become more automated) at worst.
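
To make that failure mode concrete, here is a toy sketch in Swift, emphatically not Apple’s algorithm, of how many face recognizers work: a face is reduced to a numeric “embedding,” and two faces are declared a match when their embeddings are close enough. If the training data under-represents some groups, their embeddings cluster poorly, and a fixed threshold produces more false matches and false rejections for exactly those people. The FaceEmbedding type, the distance function, and the threshold value here are all hypothetical.

```swift
// Toy illustration (not Apple's algorithm): many face recognizers reduce
// a face to a numeric "embedding" and compare embeddings by distance.
struct FaceEmbedding {
    let values: [Double]  // hypothetical feature vector
}

// Euclidean distance between two embeddings of equal length.
func distance(_ a: FaceEmbedding, _ b: FaceEmbedding) -> Double {
    return zip(a.values, b.values)
        .map { (x, y) in (x - y) * (x - y) }
        .reduce(0, +)
        .squareRoot()
}

// A fixed threshold tuned on unrepresentative training data will yield
// more false matches (wrong person accepted) and more false rejections
// (right person refused) for under-represented groups of faces.
func isMatch(_ probe: FaceEmbedding, _ enrolled: FaceEmbedding,
             threshold: Double = 0.6) -> Bool {
    return distance(probe, enrolled) < threshold
}
```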

FaceID is automated Artificial Intelligence: there will not be any humans in the process of identification or authentication. Once FaceID is deployed, it will run automatically, identify (or fail to identify) automatically, and authenticate automatically. Furthermore, Apple will be using FaceID not only to unlock the iPhone but for Apple Pay, iTunes, and other Apple products and services, and FaceID will work with third-party apps, extending its authentication to other vendors [2]. Nor will this approach be limited to Apple. If we think that having our credit card numbers breached is a problem now, what will it mean when our faces are stored insecurely?
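
For context on what “automatic” looks like in practice, below is a minimal sketch of how a third-party iOS app requests FaceID (or Touch ID) authentication through Apple’s LocalAuthentication framework. Notably, the app receives only a pass/fail result; per Apple, the enrolled face data itself stays on the device. The function name and prompt string are illustrative, not from any real app.

```swift
import Foundation
import LocalAuthentication

// Minimal sketch: a third-party app asking the system to authenticate
// the user with biometrics (FaceID on iPhone X, Touch ID elsewhere).
// The app never sees the face data -- only a success/failure result.
func authenticateUser(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Is biometric authentication available and configured on this device?
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false)
        return
    }

    // The system presents the prompt and performs the match automatically;
    // no human reviews the identification.
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Confirm your identity") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```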

Another issue to consider with facial-recognition technology is what our faces mean to us, and to those of us in different parts of the world. For example, in some cultures tattooing the face is considered a strong taboo, while in others it is a mark of honor and prominence. How we use our faces, and choose to present them, should be considered when technology companies develop facial-recognition technologies. Of course, those who are uncomfortable with facial-recognition technology won’t use FaceID, and for now, while it is still optional, this will not be a problem. However, as FaceID debuts around the world, these issues may be raised, and unforeseen outcomes may emerge.

The technology industry is often criticized for not respecting regulations or ethics, and as I mentioned in my previous piece [3], much of this comes from not having people with different perspectives on development teams who can raise these issues and questions. Within Apple there are few social scientists and nearly no anthropologists, and with the focus moving towards quantification as the metric for determining feature use and design, few qualitative researchers contribute to products. It might not be that Apple doesn’t care; it might be that Apple truly doesn’t know that it needs to care, or some other reason. As a design-focused company, Apple may assume that qualitative research is something anyone in design there could do [4], and as such, some of the more pressing social issues surrounding the deployment of FaceID could get lost in the “sci-fi” factor or the rush to market.

Because we are now on the cusp of biometric facial recognition being mainstreamed by a private technology company, with the decisions about how this will impact all of us under private control, it may be time to consider what governance or ethics review boards for the tech industry would look like going forward. At the very least, it seems time for private technology companies to hire anthropologists and other social scientists onto product teams, to create technology products that adapt to our cultural preferences as humans while respecting our sense of privacy, our desire for security, and our right to our identities.


References:

[1] Finley, K. 2017. “Can Apple’s iPhone X Beat Facial Recognition’s Bias Problem?” WIRED Business. Sept. 13, 2017. [Online]. Available from: https://www.wired.com/story/can-apples-iphone-x-beat-facial-recognitions-bias-problem/ Date accessed: Sept. 17, 2017.

[2] Perez, S. and Lunden, I. 2017. “Face ID Will Work with Apple Pay, Third-Party Apps.” TechCrunch. Sept. 12, 2017. [Online]. Available from: https://techcrunch.com/2017/09/12/faceid-will-work-with-apple-pay-third-party-apps/

[3] Applin, S. 2017. “Artificial Intelligence: Making AI in Our Images.” Savage Minds. Sept. 7, 2017. [Online]. Available from: /2017/09/07/artificial-intelligence-making-ai-in-our-images/ Date accessed: Sept. 17, 2017.

[4] Applin, S. 2017. “The Automation of Qualitative Methods.” EPIC. Jan. 18, 2017. [Online]. Available from: https://www.epicpeople.org/automation-qualitative-methods/ Date accessed: Sept. 17, 2017.
