Personal Computing: Ordinariness and Materiality

This post is part of a series on the history of computing in sociocultural anthropology.

The introduction of portable personal computers significantly broadened the scope of computing in anthropology. Where centralized mainframe computing had lent itself to large calculative tasks and team research projects, PCs fit more readily into the classic model of the lone fieldworker working primarily with textual material. Through the 1980s, computers achieved a certain ordinariness in anthropological work — the use of a computer for data collection or analysis was not limited to a vanguard group seeking to redefine anthropology, but was rather becoming a typical fact of university life (and, increasingly, life outside the university as well). This ordinariness set the stage for the explosion of social scientific interest in computers that was to come with the introduction of the world wide web and its attendant mediated socialities.

Computer applications for anthropologists were the object of a variety of reviews within the major journals of the discipline as well as specialized publications on computers and the social sciences through the 1980s and 1990s. In addition to reviewing the latest programs (from qualitative analysis software to word processors that could handle unusual scripts), many of these reviews contained some editorializing on the state of computing vis-à-vis anthropology. From a review of these reviews (primarily in American journals), I’ve identified some common themes that emerged as computers became ordinary.

Materiality

One striking feature of these reviews is their concern with the materialities of computing. While an emerging literature in critical theory explored the virtualizing potential of computing technologies — their ability to confound traditional boundaries and decorporealize cultural experience — these articles focused on the mundane pragmatics of ethnography conducted with the personal computer. In a 1987 article in Current Anthropology:

Fischer was quite surprised by the lack of problems using floppy diskettes in the very dusty environment of the Punjab. The primary base was a small house with open doorways and windows, and the only precautions taken were to face the disk drive away from the wind and to store the diskettes in dust-proof boxes.

In a note in the 1986 Bulletin of Information on Computing and Anthropology:

It is really very sensible to take back-up copies of the software: Roy sent a tape in his briefcase through the security x-ray machine at Gatwick (i.e. in the first hours of his journey), and wiped the tape. Fortunately he had another copy in other luggage…

Other concerns included the weight of computers that had to be carried in backpacks, the unreliability of power mains in non-Western urban centers, and the unavailability of electricity in more remote areas, which necessitated batteries and solar chargers that had to be moved inside and outside with the passing of rainclouds. The material particularities of computing brought into relief the dependence of these technologies on contextual supports that were not available in many of the places anthropologists conducted fieldwork. In a way, computers were not only tools for supporting the obvious tasks of ethnographic data collection and analysis; they also served as instruments for reckoning local infrastructure. As difficult mismatches made clear that computers were technologies designed for use in certain places rather than others, computing’s specificity to particular ways of life became evident.

Fieldwork Practices

PCs made it possible to take some methods for formal data elicitation and analysis out of the lab of “white room ethnography” and into “the field” proper. This move placed the computer at a crucial juncture in the tacking back and forth between “field” and “home” that characterizes anthropological knowledge production. Bringing the lab to the field could facilitate strong knowledge claims,1 but it also brought into question an idealized vision of “the field” as free of advanced technologies like the computer and the analysis stage of research that it represented.

This period also saw computers starting to be used for storing and analyzing textual data — from one’s own field notes to interview transcripts or local newspaper articles — in addition to the numerical (or categorical) data they had primarily been used for. As Bernard and Evans wrote in 1983, “We have learned that computers can crunch words just as handily as they crunch numbers, and there are interesting things ahead.”2 These qualitative data analysis programs built on earlier formalized methods for analysis such as grounded theory. Many of these methods came into the anthropological toolkit from the humanities, which had focused on textual rather than numerical applications for computing since the 1950s.

Reviews of computing applications for anthropologists evince some anxiety about the material rearrangement of fieldwork around these new tools — for instance, how much time one spent in front of the computer rather than in the village or how teams of researchers could format their data so that it could be effectively combined in a computer representation. Work was required to make PCs “fit” in these new settings they had not been designed for — both in terms of humidity or dust and the epistemology of fieldwork.

Anthropologists typically incorporated PCs into their existing fieldwork practices: using word processors to collect field notes, doing basic statistical operations when they had quantitative data, and, once back from the field, preparing manuscripts for publication. These uses were so ordinary that many publications that appear to have relied on computers don’t even mention them. For anthropologists more interested in computing, this failure to take advantage of the unique and potentially transformative capacities of computers was a disappointment — using “new tools for old jobs” — and it led some to advocate for a “move from computing in anthropology to a true anthropological computing.”

Ordinariness

This concern echoed the earliest discussions of computing in anthropology, centering on the question of whether the computer was truly transformative or just a more efficient tool for conducting business as usual. In principle, the digital computer is not capable of doing anything that couldn’t be done by hand with enough time. In practice, computerization of scientific research programs from physics to sociology generally occurred only for methods that had already been configured as “computational,” even when computerization was considered post facto to have been transformative. We might remember Hymes’s point from 1965 that computers, if they didn’t herald a transformation in what it was to do anthropology, at least encouraged a research ethic of explicitness and formalism that could be an end in itself.

The desire for novelty in method resonates with the broader discourse about the PC “revolution” popular among technologists at the time: figures like Ted Nelson posed the PC as a liberatory technology that made it possible to break free from centralized mainframe computing and its supporting social and corporate structures. However, as PCs were taken up, they became ordinary rather than transformative — as Bryan Pfaffenberger put it, “the personal computer revolution was no revolution,” but rather a slow process of building on existing understandings of what computers could be. For large computer companies like IBM, the “freedom” offered by personal computing was no real threat, and networking was already anticipated to draw these individual machines back into a relationship of centralized control.


  1. This is essentially the argument Bruno Latour makes in The Pasteurization of France about Pasteur’s research practice. 
  2. From the same article: “It is now reasonable to think of little computers as if they were telephones: that is, just as it is not necessary for the user to know about laser optics in order to make a transatlantic call, many tiresome tasks can now be handled on microcomputers without knowing how the machines or the programs work. Even more important, many tasks that could not have been handled at all can now be made short work of. Purists, of course, will argue that programming skills are essential if you want to get the most out of computers, and they are right. But we feel that many new and clever uses of microcomputers will come from new and clever ways that nonprogrammers use available software.” 

5 thoughts on “Personal Computing: Ordinariness and Materiality”

  1. “Purists, of course, will argue that programming skills are essential if you want to get the most out of computers, and they are right. But we feel that many new and clever uses of microcomputers will come from new and clever ways that nonprogrammers use available software.”

    I’m not going to articulate this very well, but here goes anyway. If you think about academic communication in terms of sources and methods, the scholarly apparatus discloses the sources and the methods are discussed in the body of the manuscript. Computer programs seem to occupy a grey area between the two. On the one hand, I see how one might justify the inclusion of a computer program in the scholarly apparatus inasmuch as it is a source, though in the sense of being a source for doing something (i.e., a tool) rather than a source of information. On the other hand, a lot of folks seem to think of the use of PCs as a method, and I am less inclined to agree that that is the case. When I participated in the SIRD, one of our instructors told us that grant proposals regularly come across his desk stating that “the data will be analyzed using SPSS.” That’s like submitting a book proposal declaring that the manuscript will be produced using Microsoft Word (or Scrivener or LaTeX), right?

  2. Thanks, Matthew. I have definitely seen that kind of attitude towards software and method, in grant proposals and elsewhere. What it usually means in the case of SPSS, I imagine, is “I might use this, but I probably won’t, or at best I’ll do some basic descriptive statistics on something that will end up not being terribly important to the overall analysis.” That sort of follows on the trends I described in this post of computers being more or less slotted into ordinary, older research practices.

    Your use of the phrase “grey area” to describe the role of software reminds me of the really excellent book Evil Media, by Matthew Fuller and Andrew Goffey. It’s a strangely written and enticing book that uses the term “gray media” to describe the weird interstitial stuff like spreadsheet software, getting-things-done schemes, project management systems, recommender systems, and the like, which are routinely neglected by media studies. It’s a fascinating and sorely understudied topic.

  3. Nick, this is a great conversation we are having, and I hope that we get to continue it. Since, however, we are drawing near the end of the series of posts you have promised, let me toss a few additional thoughts at you, based on my own research.

    The problem that I have been thinking about for decades (yes, I am that old) was posed for me in the introduction to Clifford Geertz’s Islam Observed, where Geertz observes that ethnographic insights based on research in microscopic settings cannot be evaluated in those settings. If they are genuine insights, they must prove their value in larger conversations. For someone who started out studying China and has lived and worked in Japan, that became the question of how to extend the ethnographic method I learned from reading Victor Turner to people who spend their lives in groups and places much larger than Ndembu villages. The model was clear: (1) understand the social structure; (2) examine social dramas rooted in the contradictions inherent in the social structure; and only then (3) explore in detail the symbolism employed in those social dramas.

    Back then a methods textbook I read suggested that research could be considered a matter of asking n questions of m subjects. Sometimes one question to one subject is vital: “Do you love me?” or “Will you marry me?”, for example. In the best of all possible worlds both n and m would be very large, lots of questions to lots of subjects. That wasn’t, however, feasible for anyone but governments or large corporations. Thus, practically speaking, the choice came down to hypothesis testing, a few highly selected questions to large-enough random samples of subjects, or exploratory research, lots of questions, often unanticipated ones, of a small number of subjects, i.e., ethnography. The radical change that advances in computing have brought to research life is that I can now do exploratory research with quantitative data, in a way that used to be impossible.

    In my current research, I am trying to understand the world of advertising creatives in Japan, stars in the world of advertising whose work has been recognized in the annuals published each year after one of Japan’s largest advertising contests. This brings me to how I use computers, running a network analysis and visualization program called Pajek to explore the data contained in my Filemaker Pro database. Each ad in the annuals comes with credits that identify the industry category, sponsor, lead agency and production companies involved in producing the ad, together with a list of key roles and the individuals who filled those roles. Using this data, I can ask basic questions about who worked for which agencies, in which media, in which categories, and also identify the other creatives with whom they worked together in the same project teams. What is radically different from the fieldwork I did in Taiwan on which my dissertation was based is the number of people I am talking about and the ease with which I can now explore and rapidly test “what if” scenarios. To make a long story short, I have data on 7019 creatives who have worked on 3634 ads to which they are connected by over 22,000 roles. And, thanks to advances in computing, I can now do in minutes what would have taken days, weeks, or months when I was in graduate school back in the late 1960s and using a computer meant keypunching data onto Hollerith cards and presenting the deck to the system admin people at the computer center and coming back no sooner than 24 hours later to discover that there was a bug in your program.
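    The two-mode structure described here (creatives connected to ads by roles, then projected into a network of who worked with whom) can be sketched in a few lines. This is only an illustrative toy in plain Python, not the actual Pajek/FileMaker Pro workflow, and all names and credits below are invented:

```python
# Toy sketch of a two-mode (creative-to-ad) network and its one-mode
# projection; all names and credits are invented placeholders.
from collections import defaultdict
from itertools import combinations

# Credits in the style of the annuals: (creative, ad, role).
credits = [
    ("Sato", "ad_001", "copywriter"),
    ("Tanaka", "ad_001", "art director"),
    ("Sato", "ad_002", "copywriter"),
    ("Yamada", "ad_002", "planner"),
]

# First mode: which creatives are credited on which ads.
ads = defaultdict(set)
for person, ad, _role in credits:
    ads[ad].add(person)

# Projection: count how many ads each pair of creatives shared,
# i.e. who worked with whom on the same project teams.
collaborations = defaultdict(int)
for members in ads.values():
    for a, b in combinations(sorted(members), 2):
        collaborations[(a, b)] += 1

print(dict(collaborations))
# → {('Sato', 'Tanaka'): 1, ('Sato', 'Yamada'): 1}
```

    At the scale of thousands of creatives and ads, dedicated tools like Pajek add the visualization and network measures, but the underlying data model is the same.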

    What, then, has happened to ethnography? I don’t do ethnography in the old sense of spending a year or two wandering around with nothing to do but ask impertinent questions of the people I run into in a place where I have the street smarts of a two-year-old and limited language skills. But I have spent thirteen years working for one of Japan’s largest advertising agencies, and my anthropological training equipped me to be what business anthropologists now call “an observing participant.” My topic is an industry with a very large and active trade press, industry associations, and government regulators that regularly publish floods of new information. And being a credible insider and someone who knows people who know people, I am able to arrange interviews with the key figures in the networks I analyze, in addition to reading the books they write and the interviews or industry roundtables in which they are frequently asked to expound their current views. I have a pretty good grasp of industry social structure, can examine the recurring social dramas that surround advertising, and discuss with native experts what they think is going on. My way of doing anthropology is still what I learned from Victor Turner; but thanks to my computers and the mathematicians and programmers who developed the software I use, I can do big-picture stuff on a scale orders of magnitude bigger than Vic’s Ndembu villages. Computers have changed my world.

  4. Thanks for sharing your experience, John. How to study networks of people in corporations is a problem that I encounter in my own research on people who build music recommendation services — they’re distributed all over the place, and you don’t really appreciate the challenges of a multi-sited ethnography until you set out on one: How do I justify talking about an academic conference on music informatics in Porto, the ad hoc musical theories of a Silicon Valley DevOps guy, and the music of an east coast experimental composer-turned-software engineer all in one project? Even though I know they’re all part of the same large network and any ethnography works to make coherence out of its object, it’s still a peculiar challenge. There are probably plenty of people who would not consider my research sufficiently ethnographic, but if a historical perspective on the discipline makes anything clear, it’s that what constitutes “proper” anthropology has always been in flux.

  5. “How do I justify talking about an academic conference on music informatics in Porto, the ad hoc musical theories of a Silicon Valley DevOps guy, and the music of an east coast experimental composer-turned-software engineer all in one project?”

    Nick, I don’t know the industry that you are studying in the way that I know the Japanese advertising industry. I am worried, too, that much of what I say may be irrelevant, since you are working on an emerging industry whose institutions have not yet crystallized. But based on my experience, here is what I would do.

    Does the industry have a trade press? If it does, you have other, now invisible, researchers working with you. The journalists who write for that trade press, their editors, and the people they interview are a core network that criss-crosses the industry and connects all sorts of people and events. Note, too, that what the trade press publishes is archival data in the public domain.
    Is there an industry association? If so, it may be generating an ongoing stream of press releases that will point you to what the industry “natives” think are key events and issues confronting the industry.
    Are there clubs, schools, or other associations in which key industry figures participate? Do they hold award competitions?

    In the case of my own research, I can answer “yes” to all these questions. I subscribe to two periodicals, Senden Kaigi (Advertising Forum) and BRAIN. The former covers agencies, campaigns, and marketing-related issues. The latter covers design, copy, and related creative matters. Between them, roughly 300 pages of new material land in my mail every month.

    Dentsu, Japan’s largest agency, regularly posts industry data on its website and presents it in reorganized form in annual state of the industry reports. These materials and other sources are used to produce both government and private white papers on the state of the industry.

    Advertisers, agencies, and various types of creatives all have their own associations. My data are from the annual contest held by the Tokyo Copywriters Club. If I had the time, I could look at comparable data from the Art Directors Club. And still I am only scratching the surface. The media have their own clubs, associations, periodicals and contests. My problem isn’t lack of data or possible connections between different sources. It is standing in front of a fire hose and trying to pick out the drops of water most relevant to my research questions.

    The music recommendation industry is much newer than the advertising industry, so all of this cultural infrastructure may not yet be in place. It is, however, almost certainly in formation, and I am guessing that it was news about the industry, likely delivered through the Internet, that got you interested in it in the first place. Where did that news come from? How did it get to you? Who are the key journalists, analysts and pundits covering that beat? Where is their output archived? Could a web crawler go through those archives and generate a map of connections between the academic conference, the DevOps guy and the composer turned programmer you mention? That might be one way to put it all together.
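    The crawler idea could start as something as simple as co-mention counting: two entities are connected whenever the same archived article mentions both. A hypothetical sketch, with invented article snippets and entity labels standing in for real archives and named-entity extraction:

```python
# Hypothetical sketch of the archive-mapping idea: link entities that are
# mentioned together in the same archived article. The article snippets
# and entity labels below are invented placeholders.
from itertools import combinations

entities = ["music informatics conference", "DevOps engineer", "composer"]

articles = [
    "Report from the music informatics conference, with a DevOps engineer panel.",
    "Interview: a composer on the state of recommendation systems.",
]

edges = set()
for text in articles:
    # Naive matching; a real crawler would use proper entity extraction.
    mentioned = [e for e in entities if e.lower() in text.lower()]
    for a, b in combinations(mentioned, 2):
        edges.add((a, b))

print(edges)
# → {('music informatics conference', 'DevOps engineer')}
```

    The resulting edge list is exactly the kind of input a network analysis package could then visualize as a map of connections.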
