The Mr. Potato Head rankings

This past week, after years of research, the National Research Council released its rankings of graduate programs of all types, including anthropology. The results are free and available here. Scroll down and find “A Data-Based Assessment of Research-Doctorate Programs in the United States: Data Table in Excel (2010).” You have to provide your email address before you are allowed to download the file, and I had to enable macros in Excel to view it properly.

Immediately it was clear that things were different this go ’round. There are no ordinal rankings; to my untrained eye the programs seemed to be listed merely alphabetically. Bloggers dubbed these the “Mr. Potato Head” rankings because you can make of them what you want. Being the non-competitive type who went to a gradeless hippie college, I embraced the results as fair (once I confirmed that my program was on there, at least).

A part of me wonders what use these rankings will be put to. I mean, we academics always joke about the US News and World Report rankings, while some administrators and college applicants seem to fetishize them. Inside Higher Ed just published a news story about one school using said rankings in the hiring process! Clearly these things matter a lot to some people, but I am not one of them. In retrospect I was rather clueless about the reputations or quality of programs when I applied to grad school. I lived in Raleigh, so I applied to Chapel Hill because Duke wasn’t accepting new applicants that year. Fortunately UNC turned out to be a great fit for me.

I messaged some friends on Facebook about UNC’s appearance on the alphabetical list and put it out of mind. However, I just received the October 1 issue of the journal Science (they always come late via snail mail) which leads with a very helpful news article on the methodological problems inherent in the NRC report and how one might go about interpreting it. My wife, an actual scientist who uses statistics and stuff, confirmed that I had at least picked up the gist of it. Now I can offer you, gentle reader, my interpretation.

The NRC report offers many measures by which to gauge a given grad program. Here I have chosen to focus on two of the most significant, the “R rankings” and the “S rankings.” Note that the two methods are completely different from one another and, as you will see below, yield very different results. Moreover, results for each ranking are given twice: once as the highest rank a program might plausibly hold relative to its competitors, and once as the lowest. It’s a bit like college football, where the top teams are ranked twice, once by the coaches and once by the AP, and a fan can scan both polls to see what range his or her team falls in.

Below are the R rankings, which measure an anthropology program’s reputation as determined by a faculty survey. Some universities offer multiple programs, and I’ve noted this in parentheses. A given program may be ranked as high as its position in the left column or as low as its position in the right column; the NRC is 90% sure that its actual ranking falls between those two bounds.
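For readers who want to see the mechanics, here is a minimal sketch of how a percentile-style ranking range can arise. It is my own toy illustration in Python, with invented programs, measures, and weights, not the NRC’s actual procedure: rank the programs many times under randomly drawn weights, then report the 5th and 95th percentiles of each program’s ranks.

```python
import random

# Toy illustration (not the NRC's actual code): three invented programs,
# each with two invented characteristics. Ranking them repeatedly under
# randomly drawn weights shows why each program gets a *range* of ranks.
programs = {
    "Program A": [0.9, 0.4],
    "Program B": [0.7, 0.8],
    "Program C": [0.5, 0.6],
}

def rank_once(rng):
    """Rank all programs by a weighted score with randomly drawn weights."""
    w = [rng.random(), rng.random()]
    scores = {name: sum(wi * xi for wi, xi in zip(w, x))
              for name, x in programs.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: pos + 1 for pos, name in enumerate(ordered)}

rng = random.Random(0)
draws = {name: [] for name in programs}
for _ in range(500):                        # 500 re-rankings
    for name, r in rank_once(rng).items():
        draws[name].append(r)

for name, ranks in draws.items():
    ranks.sort()
    lo, hi = ranks[24], ranks[474]          # 5th and 95th percentiles of 500
    print(f"{name}: ranked as high as {lo} and as low as {hi}")
```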

High | Program (ordered by highest possible rank) | Program (ordered by lowest possible rank) | Low
--- | --- | --- | ---
1 | HARVARD | HARVARD | 4
1 | U MICHIGAN-ANN ARBOR (Anthro) | U MICHIGAN-ANN ARBOR (Anthro) | 5
1 | CHICAGO | CHICAGO | 5
2 | UC-BERKELEY | UC-BERKELEY | 7
2 | UCLA | ARIZONA | 7
2 | ARIZONA | UCLA | 9
4 | U MICHIGAN-ANN ARBOR (Anthro and History) | PENN | 13
6 | PENN | NYU | 19
7 | NYU | PENN STATE | 19
8 | PENN STATE | EMORY U | 27
8 | U TEXAS – AUSTIN | U TEXAS – AUSTIN | 29
9 | UC-IRVINE | U WISCONSIN-MADISON | 32
9 | U WISCONSIN-MADISON | U NEW MEXICO | 32
10 | NORTHWESTERN | YALE | 32
10 | EMORY U | U WASHINGTON | 32
10 | UC-SANTA BARBARA | UC-IRVINE | 35
10 | U NEW MEXICO | NORTHWESTERN | 35
10 | CUNY GRAD CENTER | WASHINGTON U | 36
11 | STANFORD (Cultural and social) | INDIANA U | 37
11 | YALE | UC-SANTA BARBARA | 39
11 | INDIANA U | UC-DAVIS | 41
11 | U PITTSBURG | DUKE (Cultural) | 44
11 | COLUMBIA | U MICHIGAN-ANN ARBOR (Anthro and History) | 45
12 | U WASHINGTON | U PITTSBURG | 45
13 | STANFORD (Antho sciences) | COLUMBIA | 46
13 | UC BERKELEY/UC SAN FRANCISCO (Medical) | STANFORD (Antho sciences) | 47
14 | WASHINGTON U | U FLORIDA | 47
14 | UC-DAVIS | RUTGERS | 47
16 | DUKE (Evolutionary) | BROWN | 47
16 | U FLORIDA | U GEORGIA | 47
17 | SUNY BINGHAMTON | UC-SANTA CRUZ | 47
17 | RUTGERS | CUNY GRAD CENTER | 48
18 | BROWN | STANFORD (Cultural and social) | 48
18 | ARIZONA STATE | SUNY BINGHAMTON | 48
19 | U GEORGIA | DUKE (Evolutionary) | 49
19 | UC-SANTA CRUZ | ARIZONA STATE | 49
21 | DUKE (Cultural) | U VIRGINIA | 49
22 | U ILLINOIS – URBANA-CHAMPAIGN | U ILLINOIS – URBANA-CHAMPAIGN | 50
22 | U VIRGINIA | SYRACUSE | 50
23 | PRINCETON | U HAWAII – MANOA | 50
24 | SYRACUSE | PRINCETON | 51
25 | SUNY STONY BROOK | SUNY STONY BROOK | 51
26 | U HAWAII – MANOA | CORNELL | 52
27 | CORNELL | UC BERKELEY/UC SAN FRANCISCO (Medical) | 54
30 | UC-SAN DIEGO | UC-SAN DIEGO | 54
30 | RICE | RICE | 57
33 | U OREGON | U MASS – AMHERST | 57
34 | U UTAH | OHIO STATE | 57
34 | U NORTH CAROLINA – CHAPEL HILL | U NORTH CAROLINA – CHAPEL HILL | 58
35 | U MASS – AMHERST | U CONNECTICUT | 58
36 | OHIO STATE | U OREGON | 60
37 | JOHNS HOPKINS | BOSTON U | 60
39 | U CONNECTICUT | U UTAH | 61
40 | BOSTON U | MICHIGAN STATE | 61
43 | TEXAS A & M | SUNY BUFFALO | 62
43 | SUNY BUFFALO | TULANE U | 64
44 | MICHIGAN STATE | UC-RIVERSIDE | 65
47 | UC-RIVERSIDE | JOHNS HOPKINS | 67
47 | TULANE U | TEXAS A & M | 69
54 | U KENTUCKY | U KENTUCKY | 72
54 | KENT STATE (Biological) | WAYNE STATE | 72
55 | WAYNE STATE | WASHINGTON STATE | 72
55 | SUNY – ALBANY | SUNY – ALBANY | 73
55 | TEMPLE | TEMPLE | 73
56 | U TENNESSEE | KENT STATE (Biological) | 74
56 | WASHINGTON STATE | SOUTHERN METHODIST U | 74
56 | U MISSOURI – COLUMBIA | U SOUTH FLORIDA | 75
56 | SOUTHERN METHODIST U | U TENNESSEE | 76
58 | U SOUTH FLORIDA | U IOWA | 76
59 | U IOWA | CASE WESTERN RESERVE | 76
60 | CASE WESTERN RESERVE | U MISSOURI – COLUMBIA | 77
61 | U ILLINOIS CHICAGO | U ILLINOIS CHICAGO | 77
62 | BRANDEIS | U OKLAHOMA | 77
64 | SOUTHERN ILLINOIS U – CARBONDALE | BRANDEIS | 78
64 | U OKLAHOMA | SOUTHERN ILLINOIS U – CARBONDALE | 79
68 | U ALASKA – FAIRBANKS | U ALASKA – FAIRBANKS | 79
69 | U NEVADA RENO | U COLORADO – BOULDER | 80
71 | U COLORADO – BOULDER | U MINNESOTA-TWIN CITIES | 80
71 | U KANSAS | U WISCONSIN-MILWAUKEE | 81
72 | U WISCONSIN-MILWAUKEE | U NEVADA RENO | 82
74 | U MINNESOTA-TWIN CITIES | U KANSAS | 82
77 | AMERICAN | AMERICAN | 82

Below are the S rankings, which are based on a quantitative analysis of 20 program characteristics:

publications per faculty member
citations of those publications
% of faculty with grants
% of interdisciplinary faculty
% of non-Asian minority faculty
% of female faculty
awards and honors to faculty
average GRE scores of grads
% of first-year grads with full support
% of first-year grads with portable fellowships
% of non-Asian minority grads
% of female grads
% of international grads
average number of Ph.D.s graduated ’02–’06
% completing the degree in six years (!!)
time to degree
% of alums placed in academia
availability of student work space
student health insurance
student activities offered
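To see what a composite like this looks like mechanically, here is a toy sketch in Python. The programs, measures, and weights are all invented; the NRC derived its actual weights from faculty surveys in each field. The idea is simply to standardize each characteristic across programs and then take a weighted sum.

```python
# Toy sketch of an S-style score: standardize each characteristic across
# programs, then take a weighted sum. The data and weights here are
# invented; the NRC derived its actual weights from faculty surveys.
import statistics

data = {  # program -> (pubs per faculty, % faculty with grants, time to degree)
    "Program A": (2.1, 0.60, 7.5),
    "Program B": (1.4, 0.45, 6.0),
    "Program C": (0.9, 0.70, 8.2),
}
weights = (0.5, 0.3, -0.2)   # negative weight: longer time to degree hurts

cols = list(zip(*data.values()))          # one tuple per characteristic
means = [statistics.mean(c) for c in cols]
sds = [statistics.stdev(c) for c in cols]

scores = {
    name: sum(w * (x - m) / s for w, x, m, s in zip(weights, vals, means, sds))
    for name, vals in data.items()
}
for name in sorted(scores, key=scores.get, reverse=True):
    print(f"{name}: S-style score {scores[name]:+.2f}")
```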

As with the reputational rankings, a program may be ranked as high as its position in the left column or as low as its position in the right column. The NRC is 90% sure that each program’s actual ranking falls within that range.

High | Program (ordered by highest possible rank) | Program (ordered by lowest possible rank) | Low
--- | --- | --- | ---
1 | PENN STATE | PENN STATE | 2
1 | DUKE (Evolutionary) | DUKE (Evolutionary) | 3
2 | HARVARD | HARVARD | 5
3 | STANFORD (Antho sciences) | STANFORD (Antho sciences) | 9
4 | NORTHWESTERN | NORTHWESTERN | 11
4 | UC BERKELEY/UC SAN FRANCISCO (Medical) | UC BERKELEY/UC SAN FRANCISCO (Medical) | 13
5 | U MICHIGAN-ANN ARBOR (Anthro) | U MICHIGAN-ANN ARBOR (Anthro) | 15
5 | WASHINGTON U | WASHINGTON U | 18
6 | EMORY U | EMORY U | 20
6 | STANFORD (Cultural and social) | UC-BERKELEY | 22
6 | SUNY STONY BROOK | PENN | 23
7 | UC-BERKELEY | UC-DAVIS | 21
7 | PENN | UCLA | 21
7 | UC-DAVIS | STANFORD (Cultural and social) | 21
8 | UCLA | SUNY STONY BROOK | 23
8 | ARIZONA | ARIZONA | 21
8 | UC-IRVINE | U MICHIGAN-ANN ARBOR (Anthro and History) | 26
8 | SUNY BINGHAMTON | UC-IRVINE | 26
10 | U MICHIGAN-ANN ARBOR (Anthro and History) | SUNY BINGHAMTON | 25
11 | U OREGON | NYU | 32
13 | NYU | U OREGON | 31
15 | UC-SANTA BARBARA | UC-SANTA BARBARA | 41
15 | U GEORGIA | U GEORGIA | 36
15 | U WISCONSIN-MADISON | U WASHINGTON | 36
17 | U NEW MEXICO | BROWN | 40
17 | U ALASKA – FAIRBANKS | U UTAH | 48
19 | U WASHINGTON | U TEXAS – AUSTIN | 36
19 | BROWN | U NEW MEXICO | 39
19 | U UTAH | U WISCONSIN-MADISON | 42
19 | U CONNECTICUT | U CONNECTICUT | 39
20 | U TEXAS – AUSTIN | U MASS – AMHERST | 39
23 | U MASS – AMHERST | CHICAGO | 55
23 | ARIZONA STATE | YALE | 45
24 | YALE | U ALASKA – FAIRBANKS | 47
24 | MICHIGAN STATE | INDIANA U | 50
25 | CHICAGO | MICHIGAN STATE | 46
27 | U ILLINOIS – URBANA-CHAMPAIGN | U ILLINOIS – URBANA-CHAMPAIGN | 52
28 | INDIANA U | CORNELL | 49
28 | CORNELL | ARIZONA STATE | 53
30 | U TENNESSEE | UC-SANTA CRUZ | 58
31 | UC-SANTA CRUZ | DUKE (Cultural) | 55
31 | SOUTHERN ILLINOIS U – CARBONDALE | U TENNESSEE | 67
32 | UC-SAN DIEGO | UC-SAN DIEGO | 59
33 | DUKE (Cultural) | CUNY GRAD CENTER | 55
33 | PRINCETON | U FLORIDA | 64
34 | U FLORIDA | SYRACUSE | 62
34 | JOHNS HOPKINS | JOHNS HOPKINS | 63
35 | SYRACUSE | WAYNE STATE | 62
35 | WAYNE STATE | RUTGERS | 65
35 | WASHINGTON STATE | PRINCETON | 63
35 | UC-RIVERSIDE | WASHINGTON STATE | 64
35 | U COLORADO – BOULDER | UC-RIVERSIDE | 69
36 | CUNY GRAD CENTER | U KENTUCKY | 60
37 | U KENTUCKY | U SOUTH FLORIDA | 65
39 | RUTGERS | SOUTHERN ILLINOIS U – CARBONDALE | 63
39 | TEXAS A & M | U PITTSBURG | 71
40 | U HAWAII – MANOA | U VIRGINIA | 68
41 | BOSTON U | U HAWAII – MANOA | 68
43 | U SOUTH FLORIDA | BOSTON U | 67
43 | U PITTSBURG | U COLORADO – BOULDER | 66
43 | U MISSOURI – COLUMBIA | U NORTH CAROLINA – CHAPEL HILL | 73
47 | CASE WESTERN RESERVE | TEXAS A & M | 73
49 | U VIRGINIA | OHIO STATE | 67
49 | U NORTH CAROLINA – CHAPEL HILL | U MISSOURI – COLUMBIA | 71
49 | OHIO STATE | CASE WESTERN RESERVE | 70
52 | U OKLAHOMA | U OKLAHOMA | 73
53 | U WISCONSIN-MILWAUKEE | COLUMBIA | 75
54 | SOUTHERN METHODIST U | U WISCONSIN-MILWAUKEE | 75
59 | COLUMBIA | SOUTHERN METHODIST U | 73
60 | RICE | RICE | 75
61 | SUNY – ALBANY | SUNY – ALBANY | 75
62 | U ILLINOIS CHICAGO | U ILLINOIS CHICAGO | 75
64 | SUNY BUFFALO | SUNY BUFFALO | 75
67 | TULANE U | TULANE U | 77
67 | U KANSAS | U KANSAS | 80
68 | KENT STATE (Biological) | AMERICAN | 82
72 | U NEVADA RENO | TEMPLE | 82
75 | AMERICAN | KENT STATE (Biological) | 82
75 | BRANDEIS | U NEVADA RENO | 80
76 | TEMPLE | BRANDEIS | 81
77 | U IOWA | U IOWA | 82
78 | U MINNESOTA-TWIN CITIES | U MINNESOTA-TWIN CITIES | 82

Matt Thompson

Matt Thompson is Project Cataloger at The Mariners’ Museum in Newport News, Virginia, and is currently working on a CLIR ‘hidden collections’ grant to describe the museum’s collection of early 20th-century photography. He has a doctorate in anthropology from the University of North Carolina and a master’s in information science from the University of Tennessee.

17 thoughts on “The Mr. Potato Head rankings”

  1. Right off the bat, what struck me was how my personal prejudices against certain schools (based on whatever, who knows where I come up with these biases) are not reflected at all in these rankings. I mean, I knew UT Austin was a good school and all, but I never thought of it as a top 10 school. New Mexico either and I’m a Native North America specialist. I didn’t even know that Emory had an anthro grad program, much less an elite one. Anyways, that’s my confession. What surprises you?

  2. The Chronicle of Higher Education had a breakdown of all the data that went into the S ratings. The data collection for the S ratings seems especially flawed for anthropology because it does not break down the data by sub-discipline. Some departments might be great statistically in one sub-discipline but not in others. Additionally, some statistics, like publication rates, are going to vary between sub-disciplines. Cultural anthropologists are probably going to have fewer publications than archaeologists or bio anthropologists because of the nature of their research.

    Another odd thing is that some departments are missing from the data. Since the data is from 2006, some of this is understandable because some programs (like the University of Wyoming’s) did not exist at the time. However, I’m pretty sure that the University of Arkansas and UNLV had programs back then, so why are they missing?

  3. The way the exercise as a whole was envisioned seems a little off base. To me it’s like pooling the programs training MDs, EMTs, physical therapists, RNs, dentists, pharmacists, and veterinarians and then trying to determine their relative quality. That kind of study could be done, I guess, but does it really make sense?

    The S ranking looks like it privileges programs with a relatively higher percentage of faculty members in biological anthropology and archaeology and secondarily privileges medical anthropology.

    My knee doesn’t jerk very easily but those non-Asian metrics just seem odd to me.

  4. Anyone who has followed the controversy over computer science departments will appreciate what garbage this new set of rankings is. By contrast to the last NRC rankings – where there was some general agreement within Anthropology over the accuracy of the top 10 departments – this new set of rankings is bizarre, and seems wildly out of sync with department reputations. The ‘S’ ranking of Chicago down at 24 seems absurd (especially with its high ‘R’ ranking), and the high ranking for Harvard – which has always been known as a department in which the graduate students are better than the faculty (Michael, you’re the exception!) – is a strange anomaly. Penn State at #1? UNC at 49, lower than the U. of South Florida??? Gimme a break.

    There are many reasons for these weird outcomes, including the tortured methods used to arrive at these rankings, but the first thing that struck me was the wildly inaccurate data – for my own department, for example, the true average number of publications per faculty is at least 10 times the number calculated by the NRC, and that seems to be due to the fact that the NRC used a science journal database to pick up publications. Many cultural anthropology publications were not counted, so departments with relatively more biological anthro and archaeology got higher means. Cultural anthropologists in my department also publish in non-U.S. journals, many in host countries in local languages, and many of these do not seem to have been picked up either. Garbage in, garbage out.

    The lingering problem is that the rankings do matter – top 20 departments on either scale will get rewarded by university administrations, bottom 20 departments are likely to have resources withdrawn. We will all be scrambling to spin these outcomes (except for Penn State, which is probably rejoicing) for our deans, to save our programs, and my department chair has already formed one more damn committee to re-do the whole thing to try to persuade the administration that we’re not that bad. We’re not.

  5. … and as you find yourself debating the metrics, the formulas and measures, as you quibble over rankings, marking deviations from the true, you grow numb to that nagging unease you had at the outset, something you couldn’t quite grasp – a dark figure lingering in the shade trees at the edge of the field, watching sadly – for you have now entered the Era of Endless Audit.

  6. The only part of these rankings that seems at all objective is the “Research Activity” score, which is based on publications and grants (and not mentioned in Matt’s post…). The R and S are much softer measures, and they don’t correspond well with my notion of “quality.” But if Barbara’s observation about how they chose the journals for the citation data is correct, then this would produce a massive skew in the data.

    Back in the good old days (when was that? I forget; must have been before my time), perhaps there was justification for the idea that “everyone knows” what the good programs are. But now, we have more graduate programs, more faculty, much more specialization, information overload, etc. I wonder if anyone in anthropology has the broad knowledge to have a reasonable impression of the accurate quality rankings of anthropology PhD programs in general. I feel that I have a reasonable idea about this within anthropological archaeology, but I wouldn’t even venture a guess about the situation for anthropology as a whole. And things vary greatly depending on one’s views of the nature of anthropology and scholarship (should we be judged with the natural sciences, with the social sciences, or perhaps with the humanities?).

  7. First, let me just caution against bad-mouthing anybody’s program. American, which is at the bottom of the lists, has some fine faculty, and its location in DC offers plenty of resources for its students. New Mexico, too (I had no intention of slighting UNM), has outstanding people who are leaders in their field. You’ll get no snobbery from me.

    And it’s in turning to subfield and specialty that I think the limits of these rankings are revealed. As my friend Lee Baker told me, “It’s not where you earned your degree, but who you know.”

    Just to take UNC as an example, our faculty counts Arturo Escobar among its ranks, a scholar at the very top of his field (not someone I actually know, btw). If Escobar’s thing is what you do, then you will really want to study with him, whatever the NRC thinks of the department generally. And if you want to hire somebody who does Escobar’s thing, then a candidate with a letter from him is going to look really great. So in that very narrow world of HIS specialty, UNC is tops.

    I’m sure many, many other schools have similar stories, that this is the place you want to go if you want to do… Peru, Siberia, cyborgs, traditional art, folklore, whatever.

    Which again raises the question: what is the utility of this (or any) ranking?

  8. Also, it’s interesting to note who is absent from the list.

    Grad Student Guy already mentioned Wyoming, Arkansas, and UNLV. I would add College of William & Mary.

    Who do you notice as absent?

  9. Anyone who has followed the controversy over computer science departments will appreciate what garbage this new set of rankings is. By contrast to the last NRC rankings – where there was some general agreement within Anthropology over the accuracy of the top 10 departments – this new set of rankings is bizarre, and seems wildly out of sync with department reputations.

    I think there are justifiable criticisms of this ranking, but I am not convinced that results “wildly out of sync with department reputations” is one of the better ones. It seems much like a French vintner who argues that if it’s not AOC then it is second tier by definition. Holding the OBN to criteria not chosen in-house seems to me to be one of the better reasons for doing this sort of comparison.

    I would add College of William & Mary.

    And USC (the one in Columbia), perhaps because their PhD program didn’t get going until 2005.

    Who do you notice as absent?

    Programs awarding the Master’s but not the PhD as well as all Canadian programs. Both are disqualified by the general parameters of the comparison, of course, but…

    Which again raises the question: what is the utility of this (or any) ranking?

    both are options for would-be graduate students. That certainly limits the utility of the ranking for them.

  10. D’oh – my mistake. CUNY is on the list, I just missed it the first 7 times through. Sorry for the comment waste…

  11. MTBradley comments

    “I think there are justifiable criticisms of this ranking, but I am not convinced that results ‘wildly out of sync with department reputations’ is one of the better ones. It seems much like a French vintner who argues that if it’s not AOC then it is second tier by definition.”

    No; the criticism is precise. The R ranking and the S rankings are out of sync for program after program, and that is a nice illustration of what I mean. Don’t forget that the NRC started with a reputational ranking of departments and then tried to parse what variables would predict those reputations – they did a miserable job, as their own outcomes reveal, partly because the variables that might be predictive in physics aren’t necessarily the best to use in anthropology.

    And I think the French vintner analogy is flawed: reputation is not “by definition,” reputation was not fixed by one survey years ago, and it changes regularly. We are all familiar with departments that hired several heavy hitters and became powerful overnight. The ups and downs (and back up again) of Columbia’s anthropology department are a good example. One frequent criticism of the NRC rankings is that they tapped into “reputation” in 2004–2005, and by 2010 reputation had changed as departments in all disciplines changed. That’s a second sense in which the rankings of the NRC study are out of sync with reputations, and a perfectly valid criticism. When it takes five years to produce a study of a moving, changing field, that study is way out of date.

    I was trying to draw attention to a larger problem. Matt’s example of UNC is a good graduate student perspective, and makes sense. Mine is a faculty point of view, where allocation of resources is critical, and every institution has to make hard choices about the allocation of funding, faculty lines, TAships and fellowships, etc. Faculty at Chicago – my alma mommy – rejoiced when they tied for first place in the 1994 NRC rankings, and I suspect that they are mounting a significant effort to account for their apparent decline in the S rankings this time. They’ll argue that it’s not valid, and I think that’s right. But will the dean of the Division of Social Sciences be persuaded when other UC departments may have risen in the new rankings?

  12. The R ranking and the S rankings are out of sync for program after program, and that is a nice illustration of what I mean. Don’t forget that the NRC started with a reputational ranking of departments and then tried to parse what variables would predict those reputations – they did a miserable job, as their own outcomes reveal, partly because the variables that might be predictive in physics aren’t necessarily the best to use in anthropology.

    I haven’t gone through the report with a fine-toothed comb but I read it to say that each field was evaluated via a distinct set of variables. Not only that, but faculty were central to the generation of those variables. From p. 67 of the report:

    The committee was keenly aware of the complexity of assessing quality in doctoral programs and chose to approach it in two separate ways. The first, the general survey (S) approach, was to present faculty in a field with characteristics of doctoral programs and ask them to identify the ones they felt were the most important to doctoral program quality. The second, the rating or regression (R) approach, was to ask a sample of faculty to provide ratings (on a scale of 1 to 5) for a representative sample of programs and then to ascertain how, statistically, those ratings were related to the measurable program characteristics. In many cases the rankings that could be inferred from the S approach and the R approach were very similar, but in some cases they were not. Thus the committee decided to publish both the S-based and R-based rankings and encourage users to look beyond the range of rankings on both measures. Appendix G shows the correlations of the medians of the two overall measures for programs in each field. The fields for which the agreement between the R and S medians is poorest are shown in Box 4-1.
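    In code terms, the R approach amounts to something like the following rough sketch (my own illustration with invented ratings and characteristics, not the NRC’s actual model): regress sampled faculty ratings on measurable program characteristics, then use the fitted coefficients to predict a rating for every program.

    ```python
    # Rough sketch of the R (regression) approach with invented numbers.
    import numpy as np

    # rows = programs; columns = [pubs per faculty, % faculty with grants]
    X = np.array([[2.1, 0.60],
                  [1.4, 0.45],
                  [0.9, 0.70],
                  [1.8, 0.55]])
    ratings = np.array([4.5, 3.2, 2.8, 4.0])    # mean faculty ratings, 1-5

    X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column
    coef, *_ = np.linalg.lstsq(X1, ratings, rcond=None)

    predicted = X1 @ coef                       # implied "reputation" scores
    for i, p in enumerate(predicted):
        print(f"program {i}: predicted rating {p:.2f}")
    ```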

    A review of said fields would seem to belie the notion that the poor overlay between the R and S rankings is about a humanities-like discipline being evaluated via a hard science rubric.

    Animal sciences
    Ecology and evolutionary biology
    Kinesiology
    Pharmacology, toxicology, and environmental health
    Civil and environmental engineering
    Mechanical engineering
    Operations research, systems engineering, and industrial engineering
    Classics
    Comparative literature
    French and Francophone language and literature
    German language and literature
    Philosophy
    Spanish and Portuguese language and literature
    Astrophysics and astronomy
    Statistics and probability
    Agricultural and resource economics
    Communication
    Linguistics
    Public affairs, public policy, and public administration
    Sociology

    After looking at that list I’ve abandoned my initial notion that the R/S difference is about within-group variation unique to anthropology. I mean, if the rankings for evolutionary biology and statistics display similar issues, how could that be the case?

  13. What I love about the rankings is the care that the authors have gone through to make sure that they have reached 90% confidence about the results — as if Being A Top Program is an essence that inheres in an institution and can be detected indirectly through sufficiently fancy statistics. Doubtless part of my skepticism is due to a lack of understanding of the methods, but there seems to be little awareness of the way that the object of analysis is constructed rather than simply discovered. Remind me again why qualitative methods are inherently ‘less scientific’ and ‘more problematic’ than a bunch of numbers?

  14. MTBradley wrote:

    “A review of said fields would seem to belie the notion that the poor overlay between the R and S rankings is about a humanities-like discipline being evaluated via a hard science rubric.”

    Thanks for the correction. I was thinking of the use of the Thompson-Reuters service for picking up journal article citations. It works well for biology or physics, but an analysis by our department chair indicated that 65% of the articles published by our department members in the NRC study period were not included in the data, because they are not indexed through Thompson-Reuters. Archaeology and biological anthropology journals are covered very well, but cultural anthropology is poorly represented, especially non-English-language journals in the countries that host a lot of our research, and even major English-language cultural anthro journals that lean more toward the humanities/cultural studies pole.

  15. On the Thomson-Reuters database: Archaeology is certainly NOT well represented in their rather meager collection of journals. Most of the journals I regularly publish in are not included. Now perhaps archaeology is better covered there than cultural anthropology, but in no way can one claim that archaeology is covered “very well.”

    My “h-index” in the Thomson-Reuters database is 8. I once did a rough calculation using Google Scholar, where my h-index turned out to be almost double that score (this was a real pain in the rear; “Michael Smith” has published more than 34,000 articles!). An h-index of 8 means I have published 8 journal articles that have been cited 8 or more times each. I think these measures are close to worthless in a field where most journals are not included in the main database, but I use this to second Barbara’s point that the standard citation data leave out much of what we publish, and also to support my assertion that archaeology is very poorly covered.
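    For anyone unfamiliar with the measure, the definition fits in a few lines of code (the citation counts below are invented):

    ```python
    # h-index: the largest h such that h of one's papers have
    # at least h citations each. Citation counts are invented.
    def h_index(citations):
        cs = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

    print(h_index([25, 19, 12, 10, 8, 8, 7, 3, 1]))  # -> 7
    ```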

    Within anthropology, physical anthropology is the clear outlier in terms of coverage in Thomson-Reuters.

  16. Thanks, Michael — I was reporting the comment made by our department chair, and I have not examined the Thompson Reuters journals for archaeology personally. It does not surprise me that archaeology is not as well represented as I was led to believe.
