This is rich. The online open access publisher Public Library of Science (PLoS) has just launched PLoS One, an experiment in post-publication peer review. Rather than extensive peer review prior to publication, articles submitted to PLoS One will be reviewed by one editorial board member for primarily “technical rather than subjective” concerns (I think they mean technical rather than substantive… or maybe they don’t). The published articles are then opened up for peer review by readers, through annotation, discussion, and rating systems. I think this is the future for scholarly peer review, especially in fields where competition is stiff and time to publication matters (i.e. less so for anthropology than for computer science, but still). So long as these articles are primarily annotated, discussed, and rated by people who actually have some knowledge of the given field or topic, the system could move people toward a kind of research publication spectrum (multiple, frequent reports on a research project) and away from the secretive, report-it-all-in-one (or get rejected) Important Journal. The idea of “open access” here is not just about making research available, but also about staking out research territory in a public way, testing research questions in a public forum, and, hopefully, raising the bar on the kind of research that is reviewed by the Important Journals.
What I love even more about this is that the first article I looked at is a fascinating replication of Stanley Milgram’s famous Obedience experiments from the 1960s, in which research subjects thought they were participating in an experiment about learning, but which was actually about obedience to authority. The replication takes place not with real people, but with virtual humans generated in an immersive environment, and it studies subjects’ emotional and physiological responses to administering painful shocks to a character they know to be “virtual”, though they interact with it through vision and speech (and through text in the control condition). Apparently, people get a bit shaken up by torturing virtual humans. Not a surprise really, but a very clever experiment.
When I was a kid there were basically two games to play on the giant mainframe computer at my dad’s university. One was Star Trek, and the other was “Dr. Sluggo’s Torture Chamber.” The goal of that game was to keep your victims alive so they could be tortured longer. Of course, with an entirely text-based interface there wasn’t much to get upset about, but I just thought I’d point out that one of the first computer games involved torturing virtual humans. There isn’t much about the game online, but you can read the original code here.
Do we need virtual ethics to govern the virtual replication of unethical experiments like Milgram’s?
Splainkton, I’m curious, what exactly do you consider unethical about Milgram’s experiment: the implication of physical pain, the covert nature of the research, or the context of the experiment, in which participants’ behaviour was set in relation to that of Germans under the National Socialist dictatorship?
Virtual ethics, yes! And virtual IRBs and virtual consent forms too. In fact, can we just let the AIs negotiate ethics with each other and leave us humans out of all that nonsense?
FYI, Eric Kansa has a very good post about PLoS over at Digging Digitally.