What I Like About Science

2010 has seen some interesting articles calling into question some of the most basic assumptions behind the scientific method. In March there was an article by Tom Siegfried which argued that “the ‘scientific method’ of testing hypotheses by statistical analysis stands on a flimsy foundation.” Of course, the problem may lie not so much with the method as with its application. Siegfried’s point is that

Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.

I’m no statistician, so I’ll let the more mathematically literate evaluate the claims in that article. I link to it because it resonates with what my former roommate (and frequent commentator on Savage Minds) once told me. He said that biological anthropologists frequently misunderstand the results of computer programs which produce genetic trees because they don’t properly grasp the underlying math. Some people argue that a similar problem nearly brought down the world economy.

Even when the science is done right, there are some serious problems that need to be addressed. When research isn’t published in fake peer-review journals or ghostwritten by pharmaceutical companies, there are still inherent biases against publishing “negative results.” And even when everything is done right, strong empirical results are often impossible to replicate:

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.

But as the title should make clear, I link to these results not to debunk science but to praise it. Michael Bérubé has an intriguing review of the “science wars” in which he argues that both scientists and their critics have a shared interest in trying to move beyond the impasse of the nineties in order to face the twin threats of those who deny evolution (for religious reasons) and those who deny global warming (for economic reasons):

Fifteen years ago, it seemed to me that the Sokal Hoax was making that kind of deal impossible, deepening the “two cultures” divide and further estranging humanists from scientists. Now, I think it may have helped set the terms for an eventual rapprochement, leading both humanists and scientists to realize that the shared enemies of their enterprises are the religious fundamentalists who reject all knowledge that challenges their faith and the free-market fundamentalists whose policies will surely scorch the earth. On my side, perhaps humanists are beginning to realize that there is a project even more vital than that of the relentless critique of everything existing, a project to which they can contribute as much as any scientist–the project of making the world a more humane and livable place.

I sincerely hope that this is the case. What I like about science is that it is not afraid to ask tough questions. There is no reason to think that the scientific method can’t learn from all of the problems listed above and find ways to make scientific results even more robust than they were before. But I don’t think that science can do this on its own. These are also political, social, institutional, and psychological problems, and to make science better, scientists will need to work with anthropologists and others to overcome them. (See James Clifford’s talk on “The Greater Humanities.”) I like science because I think scientists understand this, in the same way that the best economists understand that economics alone isn’t enough to solve economic problems.

14 thoughts on “What I Like About Science”

  1. There are two sets of issues here.

    One has to do with the high-pressure world of big science and corporate or government-sponsored research: the pressure to be first with a breakthrough, to secure the patents, and to get new products or technologies into the marketplace is overwhelming. The temptation to cut corners, fiddle data, or misrepresent results is difficult to resist. As the idea of academia as an ivory tower where monk-like intellectuals have all the time in the world to refine their work becomes obsolete, cheating is bound to become more common.

    The other has less to do with failure to understand the math as with failure to understand that experiments test models and models are, inevitably, only partial representations of the reality being studied. The map is not the territory. Thus, for example, the experimental evidence for the law of gravity is well-nigh unimpeachable; but birds still fly and planes sometimes fall out of the sky. The law of gravity still holds, but other factors must be considered to explain why.

    This sort of problem is particularly acute in medical research since even the best statistical results do not reach the level of confirmation of experiments in classical mechanics, e.g., Galileo’s dropping cannon balls off the leaning tower of Pisa. Even when trends appear to be statistically significant, the effects of treatments based on available evidence may vary widely from one clinical case to another, for all sorts of reasons: age, gender, genetic predispositions, pre-existing conditions, interactions with other treatments, etc.

    Aristotle got it right. Too many of us (not just anthropologists, the public at large) have never read him or considered what he has to say in Nicomachean Ethics, Bk. 1.3:

    Our discussion will be adequate if it has as much clearness as the subject-matter admits of, for precision is not to be sought for alike in all discussions, any more than in all the products of the crafts. Now fine and just actions, which political science investigates, admit of much variety and fluctuation of opinion, so that they may be thought to exist only by convention, and not by nature. And goods also give rise to a similar fluctuation because they bring harm to many people; for before now men have been undone by reason of their wealth, and others by reason of their courage. We must be content, then, in speaking of such subjects and with such premisses to indicate the truth roughly and in outline, and in speaking about things which are only for the most part true and with premisses of the same kind to reach conclusions that are no better. In the same spirit, therefore, should each type of statement be received; for it is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs.

  2. I like this post very much (I’ve liked all the ones I’ve read, but this is more in my domain). It seems like the Berube quote and your last paragraph are saying two slightly different things. Berube’s is that “scientists” and “humanists” need to work together to solve problems in the world, which seems correct and obvious. Yours is that anthropologists (and I assume psychologists, sociologists, etc) can help the “hard sciences” do their job better, which is to me more interesting to think about. It seems pretty clear at the level of improving the politics of how science is done (which cannot be separated from outside pressure or internal community dynamics no matter how much we try) that outside help would be useful. Do you have something more fundamental in mind?

  3. Great points. I once asked one of my stats profs how much of what we know based on statistical probabilities is sound; using proper samples, obeying the underlying mathematical assumptions, etc… He said that this has actually been tested by statisticians, and off the top of his head it was probably not more than 15%, most likely less than that. Having studied stats, I can say that there are so many things you can do wrong that I’m very wary of anyone, including me, doing anything advanced without a full background in stats. I think it’s a mistake to teach MA students a course in basic stats, and how to use SPSS/SAS, without the absolute warning that they should never actually use what they’ve learned to do anything other than basic deductive tests of models they’re developing through direct observation. Even then they should only test models they’re actually developing without stats, because the standard practice in anth of what stats folks call “data peeking,” which is running tests of significance on raw data just to see what happens, or allowing a program to develop a regression model rather than a researcher doing it, is damaging in ways that are very hard to explain. Just don’t do it!
    However, I actually think the research shows that if one knows what one is doing, then things like regressions are pretty robust even if all the assumptions are not perfectly met. I think this is especially true in anthropology, which requires greater amounts of direct observation and testing of models in the real world. I think other social sciences get into a lot of trouble when they rely only on stats methods in a study, without any firsthand knowledge of what the hell the numbers represent.

    I was talking to a chemist in Costa Rica last week, during a trip, about the hard/soft science debate, and he was telling me how they have to use statistical math as well, because when working with changes in matter, determining when one solution becomes another material has more to do with probabilities than absolutes. This chaos theory/Mandelbrot set way of understanding things was very familiar to me.
    That being said, one should not use a hammer when a screwdriver is needed, and I think newer methods of model testing, like agent-based modeling (it’s non-linear), are probably better suited to what most ethnographers actually do.
    E.g., I once used a multiple regression model in conjunction with ego networks I gathered in Dallas to test out whether a model I was developing on the effects of fear and danger on social networks was valid. I’m a little embarrassed to admit that as far as I’m aware, such a thing has not been done before (I asked the gurus of SNA), and while I think it’s sound, I’m not absolutely sure. What kind of saved it all was the fact that it was just one method and analysis in various others used to triangulate what was going on.

    So, rather than not using stats because we cannot meet many of the mathematical assumptions perfectly in anthropology (and that includes archaeology and bio-anth, due to problematic sampling), I think we may be one of the few social sciences that can hedge many of the deficiencies of probabilistic science by not relying on it solely, and therefore not fall victim to the “ecological fallacy” quite so much.

    So, again: if the real issue isn’t stats but the very bad habit of post-hoc statistical analysis, then don’t throw the baby out with the bathwater. Simply don’t do that shit, and smack your buddies when they do.
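    The danger of “data peeking” described above can be demonstrated with a short simulation. Under a true null hypothesis, p-values are uniformly distributed, so running many significance tests on the same raw data practically guarantees a spurious “finding.” This is a minimal sketch of the multiple-comparisons problem, not anything from the comment itself; the 20-test scenario and the 0.05 threshold are illustrative assumptions:

```python
import random

def false_positive_rate(n_tests, alpha=0.05, n_trials=10_000, seed=0):
    """Estimate the chance that at least one of n_tests comes out
    'significant' when every null hypothesis is actually true.
    Under the null, each p-value is uniform on [0, 1]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # "Peeking": run n_tests independent tests on pure noise.
        p_values = [rng.random() for _ in range(n_tests)]
        if min(p_values) < alpha:
            hits += 1  # at least one spurious "discovery"
    return hits / n_trials

# A single pre-planned test behaves as advertised (~5% false positives),
# but peeking at 20 tests yields a spurious result about
# 1 - 0.95**20, i.e. roughly 64% of the time.
print(false_positive_rate(1))
print(false_positive_rate(20))
```

    Corrections such as Bonferroni exist precisely because of this effect, but the simplest fix is the one the commenter gives: decide the test before looking at the data.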

  4. Thanks John, Albion and Rick! Great comments.

    I just want to respond to Albion’s question:

    seems pretty clear at the level of improving the politics of how science is done (which cannot be separated from outside pressure or internal community dynamics no matter how much we try) that outside help would be useful. Do you have something more fundamental in mind?

    It really depends on what one means by the word “politics.” One view sees politics as a distorting factor which prevents “pure science” from functioning as it should, according to theorists like Karl Popper. Another sees politics as constitutive of all human action, and therefore of science as well. This is closer to what I mean by the word, and it fits with what you say about science not being able to be “separated from outside pressure or internal community dynamics.”

    But saying this has deep consequences which *are* fundamental to how we do science (including how we interpret and use science after it is “done” by scientists). I’m not a Science and Technology Studies (STS) type, but there are a lot of folks doing STS who are either scientists themselves or who are working closely with scientists. Someone better informed (i.e. CKelty) could give a nice rundown of some of the ways these collaborations are working.

  5. This sounds like a pastiche of “I don’t know much about art, but I know what I like”!

    I have taught statistics all my life and I know something of their use and abuse. One major source for the method was plant genetics in the late 19th-century brewing industry. The “Student” of Student’s t-test was a Guinness executive publishing anonymously. I have recently published in Anthropological Theory an account of the shift in fashion for models of distribution. Certainly the application of these methods to human subjects has been shaky from the start (after the Second World War).

    I have a friend, a metals scientist, who uses some 30 equations of a certain type to measure small changes in a particular metal induced under experimental conditions. The British Treasury uses 150 of the same equations to measure the performance of the national economy.

    My conclusion is that the real scientists are unknowingly influenced by their participation in society (which ought to be a topic of some interest to anthropologists) and that the pseudo- or social scientists deserve to be exposed as the charlatans they are. Medicine lies somewhere between these extremes.

    The problem is that you need to know some maths and science to get anywhere in this field. But that didn’t stop Bruno Latour, did it?

  6. K. Hart: “My conclusion is that the real scientists are unknowingly influenced by their participation in society (which ought to be a topic of some interest to anthropologists) and that the pseudo- or social scientists deserve to be exposed as the charlatans they are. Medicine lies somewhere between these extremes.”

    While I agree with that sentiment in broad terms (and I’m glad you put medicine in the middle, because too many people forget that one), I think such a broad characterization goes a tad too far. I remember reading Taleb’s rants against “social scientists” in The Black Swan and elsewhere, and I think he was doing the same thing. When he was demonizing social scientists, he was basically talking about economists; and even then, those economists who live entirely in a linear quant world and who think they can predict the future. I work with one such economist who is attempting to predict outliers in data, even though she’s smart enough to conduct a mix of qual and quant work. I scratch my head when we talk, and I try to explain that while she knows the math much better than I do, even I know that the math economists use can never predict outliers, precisely because they are outliers. I turned her on to agent-based modeling, which is non-linear and can be used to do such a thing to an extent. I’m with Michael Agar in his call for us to look more closely at this new way of developing models with computers.

    The second issue I have with the above is the current practice of “hard” scientists, like physicists, working for financial firms and trying to treat people like atomic particles.

  7. Rick, have you had a chance to look at Albert-László Barabási’s new book Bursts?

    Barabási, who is no slouch at this stuff, flatly rejects the notion that human beings are unpredictable. He points out that our habits make most individuals highly predictable and argues that what has been lacking to date is the data gathering and processing capability to track millions of individuals. Welcome to the world of ubiquitous tracking, from CCTV image processing to credit-card, passport, and medical record swipe recording.

  8. Aggregating more and more data seems orthogonal to the idea of predicting an individual’s behaviour though — presumably to do that you’d want to collect a lot of data about that one case. Collecting data from millions of people isn’t going to help you predict the behaviour of a single individual if the behaviour you’re trying to predict shows large variance, yes?

    If I collect data on what 10 million people prefer to eat for breakfast and find that there’s 100 distinguishable categories of result, the size of my data set is only going to give me an extremely rough, probably inaccurate prediction as to what any one particular person will have for breakfast. (Whereas if I collect data on 1000 breakfasts of a single person, I could probably make a pretty good prediction).
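    The breakfast example above can be made concrete with a toy simulation (my own illustration; the 80% habit strength, the category count, and the sample sizes are made-up assumptions, not from the thread). A model fit to the whole population can only guess the overall most common breakfast, while a model fit to one person’s own history learns that person’s favorite:

```python
import random
from collections import Counter

random.seed(1)
N_CATEGORIES = 100
N_PEOPLE = 1_000
N_OBS = 50  # breakfasts observed per person

def breakfast(favorite):
    """A person eats their favorite 80% of the time, otherwise anything."""
    if random.random() < 0.8:
        return favorite
    return random.randrange(N_CATEGORIES)

favorites = [random.randrange(N_CATEGORIES) for _ in range(N_PEOPLE)]
history = [[breakfast(f) for _ in range(N_OBS)] for f in favorites]

# Population model: always predict the single most common breakfast overall.
pooled = Counter(b for person in history for b in person)
population_guess = pooled.most_common(1)[0][0]

# Individual model: predict each person's own most common breakfast.
individual_guess = [Counter(person).most_common(1)[0][0] for person in history]

# Score both models against one fresh breakfast per person.
pop_hits = sum(breakfast(f) == population_guess for f in favorites)
ind_hits = sum(breakfast(f) == g for f, g in zip(favorites, individual_guess))
print(f"population model accuracy: {pop_hits / N_PEOPLE:.2f}")
print(f"individual model accuracy: {ind_hits / N_PEOPLE:.2f}")
```

    Under these assumptions the individual model wins by a wide margin, which is the variance point: pooled data helps only to the extent that individuals resemble the aggregate.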

  9. Andrew, I don’t think you’ve grasped the point of what Barabasi is saying. You are starting from a conventional view of prediction that assumes that the point of the exercise is to identify categories. Historically, this has been essential because sampling only supported gross generalizations. Suppose, however, that you can track in detail every movement and every transaction in which millions of individuals are involved. With a few extreme outliers as exceptions, the individuals will mostly follow highly predictable patterns — as do, for example, the millions of Tokyo commuters who ride the trains and subways to and from work every weekday. Thus, if you know a few things about any particular individual, e.g., that she works for a bank in the Marunouchi district to which she commutes from a particular station in Saitama Prefecture, you can predict with great precision what trains she will be riding at what times of day. If you also know her shopping habits on Amazon and have access to her banking and medical records, great chunks of her life become equally predictable. The predictions are not limited to 100 categories. The clusters to which she belongs can be subdivided down to the level of the individual herself.

  10. John, no, I haven’t heard of him. I will say that a lot of the nuance of the various criticisms made by folks who study this sort of thing, like the sociologists of science, is often lost in the mix. What most interests me in what I do is why people think what they do: how they get info, what they accept or reject, etc. This will tell me, more or less, what they will do, i.e., how people are manipulated to act out of their socio-cultural environment; which, I feel, can ultimately be measured via material-economic processes.
    So, in looking at this stuff I find patterns of various camps cutting out the nuance in favor of a more black-and-white view. What’s interesting is that most individuals seem to easily articulate a nuanced view of things they know about, but when they aggregate into groups they become more likely to fall into predictable patterns of behavior. It’s at the ends that vocal minorities develop, with more straightforward, myopic views, and it is these folks who will always carry the day, be it in politics, revolutions, or academics (this is why politicians spend so little time on the middle). So, you’ve got a bunch of smart folks who know science, and probabilistic science, well enough to know its various weaknesses and caveats, but who still readily admit that such ways are the best we’ve got, or at least defend them. From either extreme end, all you hear is the weakness or the strength of such methods, rather than the “yes, but.”

    To answer your broader question, however, I would say that with such perfect, aggregated information coming in, better prediction would definitely be possible, but it would use non-linear methods, such as agent-based models or Monte Carlo simulation. This is essentially how meteorologists are able to predict the weather, but their ability to predict lessens the further out in time you go, even if you were able to account for every variable on earth before running the model. This is due to the nature of what chaos theory describes, which would mean that we’d have to have supercomputers running simulations with all the variables in the universe, because of the non-local nature of the universe. But then you need to ask yourself what it is you want to predict, and what level of accuracy you need. If something can be predicted 95 times out of a hundred, then it’s not likely to be from random chance, and that may be all you need. If it’s 95% of patients in a drug trial surviving, then it’s probably not good enough. Let’s face it, for anthropology we’d be doing well to get to the point where we could all agree that it will be hot in August. I think it would be a much wiser use of our time to determine whether genocide is highly likely under a certain set of circumstances, like the 1984 drought that helped set off the genocide in Darfur. In a chaotic system one may not be able to predict what will happen moment to moment, but a wider pattern emerges over time, and it can be predicted.
    I was talking more about those social scientists who think they can predict the outcomes of extreme, non-linear systems like the current global market, where a single outlier can destroy the entire model. For us, we will never see a year of 2,000 inches of rainfall in an area, or a person 900 feet tall, so prediction is much more stable; whereas the loss of a trillion dollars can happen in minutes in the market.
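    The prediction-horizon behavior described above can be sketched with the logistic map, a textbook example of chaos (my choice of illustration, not anything from the thread): two starting points that differ by one part in a billion stay close at first, then diverge completely, which is why forecasts degrade the further out they go.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4 puts it in the chaotic regime."""
    return r * x * (1 - x)

x, y = 0.2, 0.2 + 1e-9  # two trajectories with nearly identical starts
gaps = []
for step in range(60):
    x, y = logistic(x), logistic(y)
    gaps.append(abs(x - y))

print(f"gap after 3 steps: {gaps[2]:.2e}")        # still tiny
print(f"largest gap in 60 steps: {max(gaps):.2f}")  # order one: long-range forecast useless
```

    The error grows roughly exponentially with each step, so even a perfect model with near-perfect initial data runs out of predictive horizon, exactly as with weather forecasts.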

  11. without case studies, you’re going nowhere

    • This thread suffers from two obvious logical failures — black & white thinking and hasty generalization.

    There’s the usual physical science (good) / social science (bad) split. Then useless generalizations about these categories get deployed in pointless ideological skirmishes.

    • Where are references to case studies? Are you talking about fraud and cheating? Honest error? Methodological insufficiency? Ideological interpretations which distort or misapply any empirical finding to (broadly speaking) a political end?

    • No question — certain approaches to anthropology and sociology are not scientific at all. They are avowedly anti-scientific. They appeal to “methods” which cannot be examined empirically.

    Marvin Harris refuted forms of interpretation based on ideological commitments — marxist, idealist, religious. Numerous case studies appear in his major popular works: Cows, Pigs, Wars and Witches, Cannibals and Kings. His methodological critique of anthropological theory, along with his own scientific approach, appears in Cultural Materialism.

    • To counter evolutionary biology, specious reasoning abounds. Religious ideology and political power-seeking deliberately work together to undermine modern evolutionary theory. Steve Gould did more than any other paleontologist to keep xian know-nothings from destroying the US public education system in the name of a disgusting fundamentalism. “Scientific racism”, exemplified by the notorious Bell Curve with its sophisticated statistical illogic, still exercises a perverse influence.

    Steve Gould defends the scientific credibility of the “historical sciences” in which the so-called “scientific method” (based on repeatable processes) does not apply to inherently unrepeatable events. The introductory chapters in Wonderful Life present his views.

    the anti_supernaturalist

  12. Well, what I like about science is the blog Neuroanthropology.


    I get tired of the generalized and sometimes clueless ranting in many of the comments (and a few of the posts) here at SM. I am certainly guilty here, and I apologize for some of my more bone-headed comments. But I never get tired of good anthropological science, and Neuroanthropology is an excellent example of that.

  13. Hi Kerim —

    Sometimes a good internet connection is hard to find (they were building the hotel around us at the conference I went to just after writing that last post; internet was…spotty.) I wasn’t totally clear, but by “politics” I meant something along the lines of your second definition. I would be curious to hear about what sort of collaborations are taking place. (Or even where I can read about them).

    Most of the NYer article was about applications to psychology and medicine; the bit at the end about physics was off. If a test of gravity, in a regime where we have every reason to expect Newton’s laws to work, gives you an anomalous result, the answer is invariably that the experiment isn’t actually testing what it is meant to test. Not that this stops theorists from putting out reams of articles trying to explain the result (an activity known as “ambulance chasing”) before the problem with the experiment is finally found (sometimes it is not so easy to find — it’s not always a question of incompetence, but that the experiments are really hard). And then there are the “fundons”, events with questionable statistical significance that might involve just the particle you’re looking for, right as the accelerator is about to be defunded and shut down.

Comments are closed.