Artificial Intelligence: Making AI in our Images

Savage Minds welcomes guest blogger Sally Applin

Hello! I’m Sally Applin. I am a technology anthropologist who examines automation, algorithms, and Artificial Intelligence (AI) in the context of preserving human agency. My dissertation focused on small, independent makers of fringe new technologies in Silicon Valley: what they are making and, most critically, how the adoption of the outcomes of their efforts impacts society and culture, locally and globally. I’m currently spending the summer in a corporate AI research group, where I contribute to anthropological research on AI. I’m thrilled to blog for the renowned Savage Minds this month and hope many of you find value in my contributions.

There is so much going on in the world that it is challenging to choose a single topic to write about (floods, fires, hurricanes, politics); as anthropologists in 2017, we are spoiled for choice. However, as a warm-up for the month ahead, I thought I’d start with a short piece on automation and agency to frame future pieces that will address these topics. The following is a letter I wrote yesterday morning to the House of Lords in the UK, which issued a call for participation on the governance and regulation of Artificial Intelligence, a topic of great importance to me. If AI is done well, it will benefit many; if it is overlooked, or done in haste or without forethought, poorly designed algorithms and automation could produce catastrophic outcomes and limitations that permanently alter society as we know it.

The oncoming onslaught of Artificial Intelligence (AI) is not something that will happen to humanity, but rather something that we ourselves will construct, shape, and enable in the world. Some of us may have more power than others over its implementation and deployment. It is for this reason that it is astute for those shaping the governance of our future both to gather data on, and an understanding of, the concerns surrounding AI, and to take action to protect not only their constituents but broader humanity and global society. For, as we all now realize, digital networks and digital automation are far-reaching, and the smallest digital intent can have unforeseen global repercussions.

There are two points that I would like to contribute to this call: the first is Human Agency and its preservation, and the second is social and cultural awareness when automating decisions that have ethical impact. Human agency is our capability to make choices and decisions from the options that unfold before us at each point in time. As we move through the world, and as our circumstances change, so do the options from which we may choose when making any given decision. When these options are automated, and, in the case of AI, coarsely estimated and automated, the results can restrict human freedom and movement in any class of society. Furthermore, because these decisions are automated, the cultural and social particulars of each individual, and of our cultural groups, are not considered. This can undermine people’s agency as well as their identity. I refer to ethnicity and agency within a country’s national identity as part of a discussion on ethics, values, and customs within a culture, as well as individual agency and cultural expression within that context. An autonomous vehicle whose embedded AI ethics were developed in Michigan would encode one set of cultural values, which may be out of place in Great Britain, where people express their cultural values through different kinds of vehicular ethical behavior. What does it mean to automate cultural choices and expressions in one area, and deploy those to other locales? (See Applin 2017).
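To make that point concrete, here is a deliberately simplified, hypothetical sketch in Python. No real autonomous vehicle works this way, and every name and threshold below is invented for illustration; the sketch only shows how norms tuned in one locale can be frozen into code at development time and then travel, unexamined, to every place the vehicle is deployed:

```python
# Hypothetical sketch: an ethical trade-off tuned against one locale's
# driving norms (say, Michigan's) at development time. The threshold is
# invented for illustration.
YIELD_TO_JAYWALKER_DISTANCE_M = 12.0

def should_yield(pedestrian_distance_m: float) -> bool:
    """Decide whether to yield, using a constant fixed at development time.

    Nothing here consults the law, customs, or road culture of the place
    where the vehicle actually operates; the "ethics" travel with the
    code, not with the culture.
    """
    return pedestrian_distance_m <= YIELD_TO_JAYWALKER_DISTANCE_M

# Deployed in Great Britain, the same constant silently imposes the
# development locale's norms on a different driving culture.
print(should_yield(10.0))  # True, by the Michigan-tuned threshold
```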

Automation currently employs constructed and estimated logic, via algorithms, to offer choices to people in a computerized context. At present, the choices on offer within these systems are constrained to the logic of the person or persons programming those algorithms and developing that AI logic. These programs are created, for the most part, by people of a specific gender (male), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and they contain particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts represent neither society writ large nor the individuals within it in any global context. This is worrying. We are already seeing examples of these processes failing to take children, women, minorities, and older workers into consideration, even at the basic level of hiring the talent to create AI. How, then, can these algorithms, this AI, at the most basic level, be representative of any population other than its own creators?
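As a hypothetical illustration of how narrow those offered “choices” can be (all names and rules below are invented, not drawn from any real system), consider a small routine whose option set was enumerated by a homogeneous team:

```python
def suggest_commute(user_profile: dict) -> str:
    """Offer a commute "choice" drawn only from the developers' own world.

    The option set and the fallback assume car ownership, a smartphone,
    and a credit card; walking, buses, and mobility constraints were
    never enumerated, so they can never be offered.
    """
    if user_profile.get("owns_car", True):
        return "drive"
    return "rideshare"

# A user outside the developers' assumptions still gets only the
# developers' answer, whether or not it is apt for them.
print(suggest_commute({"owns_car": False}))  # "rideshare"
```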

The digital revolution has had a profound global societal impact, and the issues we have seen with Google and Facebook bumping up against privacy laws and regulations in Europe are a direct result of this cultural mismatch and of a lack of awareness of other ways of living and life. Thus, one important and critical step for government would be to mandate that teams developing AI include research scientists and contributors from multiple cultures, social classes, ethnicities, and genders.

If this does not happen, representative power and advantage will be distilled into a very small group of people, who will be designing a system mostly for themselves, with the power and capability to extract habits, data, and behaviors from others, all concentrated within technology companies. This problem is already ongoing: Google and Facebook have more data (and more relevant data) on citizens than most governments do.

If the companies building this future do not include most of humanity—how could the AI they produce be fair, representative, and appropriate for societies?

Additionally, the government should include social scientists, particularly anthropologists, on a panel or task force as these debates move forward. Anthropologists specialize in understanding groups and group cultural behavior, and there are anthropologists who have training in technology and technology development.

The public should be made aware that their choices will be changed by AI, and that their cultures and genders are likely not to be fully considered by it. If people want to retain true agency and choices equivalent to, or better than, what they have now, they must understand how critical it is that AI development teams be balanced and representative, and that all of us be included in the shaping of our future.


One thought on “Artificial Intelligence: Making AI in our Images”

  1. Sally, are you aware of the work of Grant Jun Otsuki, a Canadian-Japanese anthropologist who has been studying Japanese engineers working with humanoid robots, with particular reference to what makes a robot “human”? I heard him give a fascinating talk last year that, if my memory is correct, centered on the proposition that while North American engineers tend to see humanity as something internal, which leads to a focus on software “inside” the robot, the Japanese engineers take the position that a robot can be human if it acts like a human. Their focus is on external interaction rather than internal software.

    He also described an eerie instance of a robot being programmed to do something that would ordinarily be entrusted to a human. In the wake of the Fukushima 1 nuclear disaster, a robot was programmed to enter the contaminated zone around the nuclear power plant and chant Buddhist sutras for the victims of the earthquake and tsunami that caused the disaster — a task that would normally be performed by a Buddhist priest.

    Should you be interested in getting in touch with him: http://www.gjotsuki.net

    Note, however, that he has now left Tsukuba University in Japan and moved to New Zealand.
