PLoS One, Stanley Milgram and the CAVE

This is rich. The online open access journal Public Library of Science (PLoS) has just launched PLoS One–an experiment in post-publication peer review. Rather than extensive peer review prior to publishing research, articles submitted to PLoS One will be reviewed by one editorial board member for primarily “technical rather than subjective” concerns (I think they mean technical rather than substantive… or maybe they don’t). Then the published articles are opened up for peer review by readers–through annotation, discussion and rating systems. I think this is the future for scholarly peer review, especially in fields where competition is stiff and time to publication is important (i.e. less so for anthropology than for computer science, but still)–and so long as these articles are primarily annotated, discussed, and rated by people who actually have some knowledge of the given field or topic, it could become a system that moves people towards a kind of research publication spectrum (multiple, frequent reports on a research project) and away from the kind of secretive, report-it-all-in-one (or get rejected) Important Journal. The idea of “open access” here is not just about making research available, but also about staking out research territory in a public way, testing research questions in a public forum, and, hopefully, raising the bar on the kind of research that is reviewed by the Important Journals.

What I love even more about this is that the first article I looked at is a fascinating replication of Stanley Milgram’s famous Obedience experiments from the 1960s, in which research subjects thought they were participating in an experiment about learning, but which was actually about obedience to authority. The replication takes place not with real people, but with virtual humans generated in an immersive environment, and it seeks to study emotional and physiological responses to the administration of painful shocks to a character that the subjects know to be “virtual”–though they interact with it through vision and speech (and through text in the control condition). Apparently, people get a bit shaken up by torturing virtual humans. Not a surprise really, but a very clever experiment.

ckelty

Christopher M. Kelty is a professor at the University of California, Los Angeles. He has a joint appointment in the Institute for Society and Genetics, the department of Information Studies and the Department of Anthropology. His research focuses on the cultural significance of information technology, especially in science and engineering. He is the author most recently of Two Bits: The Cultural Significance of Free Software (Duke University Press, 2008), as well as numerous articles on open source and free software, including its impact on education, nanotechnology, the life sciences, and issues of peer review and research process in the sciences and in the humanities.

5 thoughts on “PLoS One, Stanley Milgram and the CAVE”

  1. When I was a kid there were basically two games to play on the giant mainframe computer at my dad’s university. One was Star Trek, and the other was “Dr. Sluggo’s Torture Chamber.” The goal of that game was to keep your victims alive so they could be tortured longer. Of course, with an entirely text-based interface there wasn’t much to get upset about, but I just thought I’d point out that one of the first computer games involved torturing virtual humans. There isn’t much about the game online, but you can read the original code here.

  2. Splainkton, I’m curious, what exactly do you consider unethical about Milgram’s experiment–the implication of physical pain, the covert research issue, or the context of the experiment, wherein participants’ behaviour was set in relation to that of Germans under the National Socialist dictatorship?

  3. virtual ethics, yes! And virtual IRBs and virtual consent forms too. In fact, can we just let the AIs negotiate with each other about ethics and leave us humans out of all that nonsense?

Comments are closed.