Psyche: #lfmf bro!
Ha! Just kidding.
The 30 March 2012 issue of the journal Science includes a news piece, “Psychology’s Bold Initiative,” on a possible moment of introspection for the discipline. Spurred on by some recent high-profile academic fraud cases, a cohort of scholars is leading a movement aimed at scrutinizing their field.
According to Science, many psychologists now feel that their field has a credibility problem. To my ear this underscores some of the differences between our disciplines and simultaneously calls to mind the critiques of Clifford and Marcus, et al.
The greater concern arises from several recent studies that have broadly critiqued psychological research practices, highlighting lax data collection, analysis, and reporting, and decrying a scientific culture that too heavily favors new and counterintuitive ideas over the confirmation of existing results. Some psychology researchers argue that this has led to too many findings that are striking for their novelty and published in respected journals – but are nonetheless false.
As sociologists and anthropologists are both prone to lament, when the mainstream media wants a social scientist, it turns to psychology. Even on the topic of human origins it seems the evolutionary psychologists have one up on the human ecologists and bioarchaeologists. Perhaps this envy has contributed to psychology being the punch line in some anthropological circles, but more profound than this are the very different ways the disciplines conduct research and the kinds of knowledge they claim to produce.
Whereas anthropology oft claims the mantle of science with a “but…,” psychology appears to have no such hesitation. Rather than deconstruct the authority of science here, I will simply nod in the general direction of France. Psychologists self-identify as scientists. Given that they believe themselves to practice science, it follows that one way they may right their vessel is to test the reproducibility of others’ conclusions.
This in and of itself is a radical notion. Reproducibility is one of the core principles of science, but the current prestige economy does not reward this sort of work, nor does the publication regime offer an outlet for its dissemination. What would be the incentive for expending one’s energies testing reproducibility in an academic culture that gives the highest rewards to new ideas? Just as limiting is the virtually unpublishable status of negative results, which gives scientists little motivation to identify false positives and may lead them to structure their research agendas around what is publishable rather than what needs to be known.
A group of 50 psychologists have organized themselves as the Open Science Collaboration with the stated goal of systematically replicating recently published psychological experiments. This is very interesting to me. Instead of worrying about what the limitations of their field might be there’s a group out there setting up an empirical project to test where that limit is.
Jonathan Schooler, author of a study to be tested by the OSC, was quoted in the news story and I found his words to be very revealing.
“I think one would want to see a similar effort done in another area before one concluded that low replication rates are unique to psychology. It would really be a shame if a field that was engaging in a careful attempt at evaluating itself were somehow punished for that. It would discourage other fields from doing the same.”
Here is the bit that reminded me of Writing Culture. You don’t have to go far in the house of sociology to find scholars who perceive anthropology as having gone off a cliff in the 1980s, a historical moment epitomized by the radical reflexivity of Writing Culture. Even in anthropology it’s not hard to find old schoolers who think the whole thing has gone to pot and Geertz is the villain. But the kernel of the Writing Culture critique is the same as that of the OSC movement: it was a call for more empiricism, not less.
Correct me if I’m wrong, but reproducibility doesn’t really seem to have a place in contemporary cultural anthropology. On the one hand this makes methodological sense. In ethnography I am my own instrument. The culture I experience is different than the culture you experience even if we’re in the same place at the same time.
But it’s worth asking again, once more with feeling: how is it that we believe what we read in the journals? I mean, we can disagree about the meaning of an event or whether this idea from Edward Said really goes with that idea from Richard Price. But for the most part, if somebody describes Carnival in Trinidad, don’t we accept that description and move on to the interpretation?
What the OSC has done is select studies from three high-impact psychology journals published in 2008. “They reasoned that articles published during this time frame are recent enough that most original authors can find and share their materials, yet old enough for the OSC to analyze questions such as whether a study’s reproducibility correlates with how often it has been cited subsequently.” So far the response from study authors has been positive.
What would it look like to refashion this testing of reproducibility on anthropological terms and check up on publications to see if authors really knew what they were talking about? I’m not talking about the rare case of outright fraud, where authors are willfully deceiving readers. But could you pick, say, an essay on Indonesian cross-dressers out of Cultural Anthropology and go to Indonesia and find what they were talking about?
I don’t know, that sounds kind of crazy. Who would pay for it?