Cultural Ecology: Modeling with Computers

This post is part of a series on the history of computing in sociocultural anthropology.

Last week, I surveyed mid-century formalist approaches to computing and culture, which took culture as ideational — a matter of mental states, structures, or content. Ethnoscience and cognitive anthropology epitomized this attitude toward culture, taking part in a cross-disciplinary “cognitive revolution.” As Paul Edwards has outlined, computers were central to the emergence of cognitive science, which was founded on an understanding of the mind-brain relation by analogy to software and hardware. George Miller, a pioneer of cognitive psychology, suggested that computers helped collapse the behaviorist paradigm. Where behaviorism limited psychologists’ theorizing to the mind’s strictly observable “outputs” — lever pulls and all that — the computer offered a model for thinking about “memory, syntactic rules, plans, schemata, and the like.” These notions could be instantiated in actual computers, providing a working model of what was going on in the mind. As Miller said: “We didn’t believe that computers were giant brains, but we could see the similarities.”

However, cognitive and otherwise ideational approaches to culture did not have a monopoly on computational models and methods.

Computers also proved useful for cultural materialists and ecologists such as Marvin Harris (himself a vociferous critic of ethnoscience), who thought of culture as part of a system of relationships with the material environment. For these anthropologists, computers could serve as aids to the computation of resources, populations, and so on, but their usefulness was not limited to making calculations easier. The feedback loops and systemic dependencies of cybernetics and computers provided a way to model what appeared to be similar relationships found in natural cycles of resource flow.[1]

These ecological models had less in common with digital computers than with analog computers, in which electronic parts were arranged into systems analogous to the phenomena to be explained and their behavior observed. How these systems behaved could then provide insight into control processes at play in other, similarly arranged systems. Establishing these analogical relationships was itself a form of theorizing or explaining. As Bateson said, “You make two statements, and what is true of both of them is the formal truth. This is what is called explanation.”

A central issue raised by cybernetics was one of representation: what did it mean to say that a computer “modeled” a particular ecological or cultural system? In what sense was a computer like a brain or an ecosystem? This problem opens up a literature much too vast to be dispatched here, but one aspect of particular anthropological relevance concerns the relationship between ritual behavior and environment. Classic texts of cultural ecology, like Roy Rappaport’s Pigs for the Ancestors (1968), and more recent ones, like Stephen Lansing’s Priests and Programmers (1991), interpreted ritual practices in cybernetic terms as means of organizing resource flows. For Rappaport, the ritual kaiko pig feast of the Tsembaga in highland New Guinea could be understood as a form of resource management. For Lansing, the Balinese subak system of water temples and spiritual observance could be understood as a way to manage water rights.
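To make the cybernetic reading concrete, here is a minimal sketch in Python of the kind of feedback loop Rappaport’s interpretation implies: a pig herd grows until it strains the local carrying capacity, and the feast acts as negative feedback that culls it. To be clear, this is my illustration, not Rappaport’s model; the function simulate_kaiko and every parameter in it (growth_rate, ritual_threshold, cull_fraction) are hypothetical placeholders chosen only to make the oscillation visible.

```python
# A toy feedback loop in the spirit of the cybernetic reading of the kaiko.
# All numbers are illustrative placeholders, not ethnographic data.

def simulate_kaiko(years=50, herd=20.0, growth_rate=0.25,
                   carrying_capacity=200.0, ritual_threshold=0.8,
                   cull_fraction=0.75):
    """Yield (year, herd_size, feast_held) for each simulated year."""
    for year in range(years):
        # Logistic growth: pressure on gardens rises as the herd
        # approaches the carrying capacity of the local environment.
        herd += growth_rate * herd * (1 - herd / carrying_capacity)
        feast = herd >= ritual_threshold * carrying_capacity
        if feast:
            # The feast is the system's negative feedback: it resets the
            # herd well below the threshold, and the cycle begins again.
            herd *= 1 - cull_fraction
        yield year, herd, feast

for year, herd, feast in simulate_kaiko():
    print(f"year {year:2d}: herd {herd:6.1f}{'  <- kaiko feast' if feast else ''}")
```

Run for a few decades of model time, the toy system settles into the boom-and-cull oscillation that the cybernetic interpretation attributes to the ritual cycle. Lansing’s subak case would call for a different sketch, with water flows and pest pressures coupled across neighboring terraces.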

These interpretations cast ritual and ecology together as a kind of computational system in its own right. However, they notably did not take ritual behavior on its own terms: described in the language of ecologists, Tsembaga kaiko festivals and Balinese subaks were efficacious not for any metaphysical reason, but because they effectively coordinated the distribution and maintenance of natural resources. Rappaport, for instance, distinguishes between the “cognized model” of the Tsembaga — “the model of the environment conceived by the people who act in it” (1968:238) — and his own “operational” model, which indexes a material reality unacknowledged in the cognized model. Although the truth of the cognized model may be important to the people conducting the ritual, for Rappaport, “the important question concerning the cognized model […] is not the extent to which it conforms to ‘reality’ (i.e. is identical with or isomorphic with the operational model), but the extent to which it elicits behavior that is appropriate to the material situation of the actors” (1968:239).

This discrepancy — in which the “model” of the anthropologist is thought to work, while the “model” of the ritual participants is thought to be incidental — has been criticized. Particularly in the context of anthropology’s traditional field sites, the politics of computer simulation and representation are tied up with colonial practices of knowledge and power.[2] While a computer model’s correspondence to ecological phenomena is taken to be evidence of knowledge or explanation, no such explanatory capacity or intent is granted to the “cognized” (we might say emic) model.

In his work on ethnomathematics and ethnocomputing, Ron Eglash has criticized this tendency: anthropologists, when modeling phenomena like the Balinese water temples or emergent patterns of mud terraces in the low hills of Ecuador, have typically considered their mathematical or computational qualities to be unintentional. Such anthropologists reserve “intent” for individuals who can express their motivations or plans in Western mathematical terms and regard these collective, often long-term developments as happy accidents.

The use of computers to produce functional models of processes both ecological and mental raised significant questions about the role and scope of anthropological description and explanation. What was the connection between the model and the thing modeled? What kinds of descriptions could count as explanations as well? These philosophical questions, about the epistemology and politics of modeling, anticipated later critiques of anthropological knowledge practices. And, curiously enough, although computers were sometimes credited with helping cognitive science overcome the behaviorist paradigm, a kind of behaviorism has re-emerged in big data analytics. With the growth of data sets containing user interactions with websites and other services, people interested in “culture,” especially in commercial contexts, have returned to thinking of it in behaviorist ways.


  1. Cybernetics, especially as interpreted by Gregory Bateson, is an especially well-known part of the history of computing in anthropology. So, either ironically or appropriately, I don’t attend to it directly in this series. Maybe you, dear reader, can write a blog post about it. 
  2. For more on the politics of simulation, see the exchange between Stephen Lansing and Stefan Helmreich in Critique of Anthropology: Helmreich’s critique of Priests and Programmers, Lansing’s reply, and Helmreich’s rejoinder to Lansing. 

3 thoughts on “Cultural Ecology: Modeling with Computers”

  1. Nick, thanks again for this series. In the conclusion of this segment, “With the growth of data sets containing user interactions with websites and other services, people interested in “culture,” especially in commercial contexts, have returned to thinking of it in behaviorist ways,” you may, however, be falling into the trap set by an anthropology-centric world view. Outside of anthropology departments, cybernetics merged with operations research to become the dominant metaphor of the Systems Thinking movement in management and organization theory. Gregory Bateson is often mentioned as a pioneer in this area; but a history of this movement would have to include, at least to my by no means authoritative knowledge, Gerald Weinberg’s An Introduction to General Systems Thinking, Peter Checkland and Jim Scholes’s Soft Systems Methodology in Action, and Peter Senge’s The Fifth Discipline. There is a huge amount of discussion, spanning a spectrum from rigorous to mystical, on LinkedIn. And, for anyone who would like to play with models to see how they work, there is Insight Maker [http://insightmaker.com/help], a free on-line tool that simplifies construction of both analytic stock-and-flow and agent-based models. In this field, too, as in cognitive science, anthropologists have been, except for a few pioneers, only peripheral players in thinking now dominated (again, just my own impression) by management consultants, biologists, and game designers.

    An interesting trail to follow here is the growing attention to the human element in systems thinking. Operations research was developed by the military to manage the logistics of fighting World War II. Attempts to apply it to postwar corporations ran into serious problems when systems thinking was extended beyond logistics. It turned out that how human actors envision their positions in large systems and respond to policy proposals that involve changes in organizational structure or corporate culture is a critical factor in whether such proposals achieve their goals or not. That is how systems thinking gets from “general systems thinking” to “soft systems analysis” (where “soft” includes the human element) and then to a “fifth discipline” and the idea that for systemic changes to work those who are part of the system have to be considered, briefed, brought on board, and, in the best of all possible worlds, themselves start to think in terms of systems instead of positions and isolated problems in need of immediate solution.

    Tragically, this sort of thinking came too late or encountered too much ignorance and/or resistance to affect the Vietnam War. During WWII, Ruth Benedict began The Chrysanthemum and the Sword with the observation that the Japanese were the most alien enemy that the US military had ever faced. Her evidence was casualty figures from WWII battlefields. Evidence from Europe suggested that killing or disabling between a quarter and a third of an enemy force would be sufficient to demoralize and defeat it. That the Japanese would hold out and fight to the last man on islands in the Pacific shattered what had been taken to be military common sense. Benedict’s job was to figure out why the Japanese fought that way. By Vietnam, however, the sort of analysis that Benedict offered had been forgotten or was simply ignored. Robert McNamara’s whiz kids recruited from Ford used operations research models to calculate the tonnage of bombs required to make the North Vietnamese give up the fight. It may have been simple ignorance; I recall a course at Cornell where Lauriston Sharp told us that when the Gulf of Tonkin Resolution was passed, there were only three scholars in the USA who knew Vietnamese, two of whom were archeologists. But, for whatever reason, US planners grossly misunderstood Vietnamese psychology and a culture of resistance to overwhelming force cultivated by centuries of wars with China. They ignored this human element, and the US lost the war, with effects still reverberating through US political culture and the global system today.

    Yes, indeed, discrepancies between the observer’s “model” and the native’s “model” can have serious consequences. It is far from clear, however, that substituting the latter for the former will produce a better world. Anyone here for a New Caliphate or a modernized version of Confucian authoritarianism?

  2. This is a great point, John. I was thinking more narrowly of the kind of work I encounter in my fieldwork with people who design recommender systems, where the normal data input options are something like {skip, pause, thumbs up, thumbs down, quit}. In that case, and with a lot of the transactional data that people are using to do analytics on the web, you’ve got something that looks a lot like behaviorism, even when it’s been influenced by other schools of thought or when that data is used to produce models that behaviorists would have disapproved of (inferring mental states from clicks, or something like that).

    But the argument you make here is, I think, a good one for not reducing these practices to a single thing — for not assuming that because something is done with computers or certain kinds of data, it is necessarily part of some overarching epistemology. The different ways to approach systems theory, as you note, are just one example. (And that Benedict example is great — there is more to be done with thinking through her “ethnography at a distance” and the kinds of things the aforementioned data analysts might be trying to do.)

    The point of the “native’s model” is less about the idea that one should rule all and more about the politics in play when certain models are automatically given credence over others. I may be an unrealistic pluralist, but I think that systems designed for the inevitability of multiple, partial interpretations (however one might do that) are the way to go. One way toward that is to pick at the anthropological common sense idea that computational systems designed in the west are necessarily monolithic in terms of ideology/epistemology/what have you. Once we’ve knocked some crevices in there, we might grow more interesting possibilities.
