Computing: From Method to Object

Display system, from Shigeharu Sugita’s 1987 “Computers in Ethnological Studies: As a Tool and an Object”


This post is the final part of a series on the history of computing in sociocultural anthropology.

The 1980s marked a significant shift in the history of computing and anthropology. Up to this point, computers were primarily considered tools that could be incorporated into anthropological methods. Georgina Born has described this instrumental attitude as “modernist,” based on the assumption that computational tools are basically rational and thus “a-cultural.” A number of coincident developments during the 1980s complicated this assumption, shifting computing from an anthropological tool into an object of study in its own right. With the spread of PCs, computing left university or corporate mainframes, entering and influencing traditional anthropological field sites as well as newer ones, such as the workplace. With more anthropologists heeding Laura Nader’s 1969 call to “study up” and the increasing influence of science and technology studies on anthropological research programs, the scope of anthropological interest also spread, incorporating “high-tech” sites where computers had already become well-established tools. Along with the increased interest in the cultural politics of method heralded by the reflexive turn, these moves brought computers into the frame for anthropology — to serve not only as ready-to-hand tools but as present-at-hand objects of anthropological interest. Anthropologists began to encounter computers not only as tools that they might use or avoid, but as cultural artifacts to be studied anthropologically.1

In spite of anthropology’s historical interest in technology and its methodological engagements with computing, the discipline was relatively late to consider the cultural aspects of computers — later even than computer scientists were to take on society and culture as important features of computer systems. Researchers in Human-Computer Interaction, Information Science, and, later, emerging interdisciplinary research programs such as Computer-Supported Cooperative Work and Social Informatics had been investigating the social, cultural, and organizational contexts in which computing took place since the late 1970s. Jonathan Grudin described this as “the computer reaching out” beyond its material boundaries, and researchers drew on methods from across the social sciences, including ethnography — although rarely with the intensity typical of anthropology. This research was distinguished from other humanistic and social scientific interest in computing by its focus on providing resources for system designers — a distinction that persists to the present day, even with a series of “turns” toward participation, users, and communities in framing the work these designers do.

When anthropologists did engage with computing as an object, it was often through dialogue with these other disciplines and in interdisciplinary spaces such as the Society for Social Studies of Science. Around 1990, the emerging anthropology of computing was closely tied to the anthropology of work, where it was typically concerned with the question of whether a transformative “computer revolution” was occurring. Although the general consensus among anthropologists was “no” — society made computers rather than the other way round — this work was practically overwhelmed by a widespread popular and academic discourse that suggested computers were “impacting” or revolutionizing society, so even those who disavowed such a “technicist” position spent much of their writing arguing against it. In taking on these debates, anthropologists joined STS scholars, sociologists, philosophers, and historians of science and technology who had already developed similar ideas about the sociocultural aspects of technological systems.

In dialogue with STS, many anthropologists interested in computing and medicine took up the figure of the Harawayan “cyborg” as a useful way to think about the assemblage of human and machine components, while questioning the essential humanness or machineness of either. Many anthropologists used the language of actor-network theory as a way to describe these heterogeneous sets of “actants” without the normative assumptions of revolution or efficacy, although others criticized the militaristic, capitalistic, and agonistic terms through which ANT presumed actants to interact. These debates and borrowings have continued, fairly intact, into the present. In spite of some hand-wringing through the 1990s about what might essentially distinguish the anthropology of science and technology from STS, by now the anthropology of technoscience generally and STS have blurred to the point where distinguishing them serves little purpose.

At this point, the narrow focus on disciplinary anthropology that has constrained this series so far stops being useful: to focus on research that is “properly anthropological” would be to miss a multidisciplinary explosion of scholarship on computing’s sociocultural aspects. This work, from Lucy Suchman’s pioneering research at Xerox PARC to more recent analysis of “postcolonial computing,” extends and adds to themes from the longer history of anthropology. It offers critical alternatives to a discourse about computing that allows only for debate about whether a method or technology is truly “new” or “effective.” Turning away from these questions allows for other, more pertinent questions about life lived with technology: learning how computers are socioculturally emplaced helps us to see more thoroughly what computational methods are and how they became that way.

However, as I hope this series has made clear, the distinction between research objects and methods is not always clear. The challenge for contemporary anthropologists is to make sense of computing when it is both an object of study and a tool for making anthropological sense of the world. Ethnography has long played in the confusion between objects and methods, generating knowledge not just from the application of methods to objects, but from questioning their separation. Our objects are often others’ methods and vice versa. As we encounter computational practices out in the field which could, in other moments, have been called “anthropological,” we would do well to re-examine how older debates about computers, knowledge, and culture have played out.

That’s a wrap on this series of posts on computers and sociocultural anthropology. If you’ve got your own stories about anthropologists and computers, from the past or present, please share them! Thanks for reading.


  1. This shift can be registered in a fact related by David Hakken: While anthropologists engaged in computational methods had long sought a “Computing Unit” in the American Anthropological Association, they were ultimately unsuccessful. But in 1988, a committee on “Computing as a Cultural Process” was established in the General Anthropology Division. The committee was eventually renamed the Committee on the Anthropology of Science, Technology, and Computing (registering the influence of STS on this work in anthropology), went dormant by the early 2000s, and has recently been revived. The reemergence of CASTAC and the recent establishment of the Digital Anthropology Interest Group in the AAA point to an in-progress reconfiguration of anthropology’s engagement with computing in the disciplinary mainstream. 

One thought on “Computing: From Method to Object”

  1. Hi, Nick. I keep hoping that some of our other colleagues would chime in here, at least with their own experiences with computing. One issue with anthropological attempts to address computing as an object of inquiry may simply be generational differences. As I have mentioned before, my first contact with computers involved Hollerith cards, FORTRAN, and an IBM 360 mainframe. My daughter’s first contact with computers was playing with a paint program on a 512K “Fat Mac” c. 1985. My grandkids have played a bit with both desktop and laptop computers; but their most frequent computing experience is smartphones and tablets.

    But, turning to another topic: your review omits artificial intelligence (AI) research, which involved attempts to address core anthropological issues. Is passing a Turing test enough to qualify as human, for example? Back in 1979, when I was an unemployed anthropologist, and just before my wife brought us to Japan, I spent a year as a research assistant in Roger Schank’s AI project at Yale, writing concept frames and scripts in LISP for a program called FRUMP (fast reading, understanding, and memory program). The assumptions behind the program included Schank’s view of human cognition as a process of applying concepts and scripts to experience — in this case, news read directly from the AP ticker — and the assumption that journalists are trained to put the essence of their story in the lead sentence, with details to follow. Thus, it seemed to make sense to write a program that could parse lead sentences, determine the concept each conveyed, then use the concept to trigger a script that would look for additional information.

    Our attempts to code this process ran up against the fact that natural language is inherently ambiguous: even the most objective, “just the facts, ma’am” journalists use a lot of metaphor, and in talking about business and sports many of the metaphors are military. A piece of black humor running around the project envisioned a future in which a satellite loaded with hydrogen-bomb-tipped missiles orbits the earth, ready to obliterate any country that threatens world peace. On earth a programmer is frantically typing, “No FRUMP, no, ‘Russia crushes Israel’; that’s just a soccer game.” I also recall one of the most aptly named academic articles I’ve ever seen. By computer scientist Drew McDermott, it was titled “Artificial Intelligence Meets Natural Stupidity” and addressed the issue of how to get computers to replicate the common kinds of mistakes that humans make in interpreting what one another have to say.
