It’s the Data, Stupid: What Elsevier’s purchase of SSRN also means

On Tuesday May 17, 2016, SSRN announced that it was being acquired by Elsevier. SSRN, the Social Sciences Research Network, is a widely used repository of scholarly articles that can be uploaded and downloaded by anyone. It is “open access” (that’s in quotes because SSRN’s approach to OA has always been partial and peculiar, and Tuesday’s news confirms that perception). Elsevier is, well, if you don’t know who Elsevier is, none of this will make sense to you.

Much of the reaction has focused on what will happen to all the papers in SSRN. Twitter has been alight and aghast with reactions, mostly of the form “this is not going to end well.” Many people are asking whether Elsevier will continue to do all the evil things it normally does: charge huge subscription costs to libraries, charge individual readers high download fees for single articles, continue its assault on open access by shutting down access, etc. Some people are suggesting that it is time for an alternative, such as an ArXiv for social science research. All these things are good food for thought.

I would suggest to SSRN users, though, that they have nothing to worry about. I don’t think Elsevier cares about the papers. God knows they barely care about the 2000+ journals they publish, other than Cell and a few others—a fact obvious to anyone who has published in an Elsevier journal, or edited one. The papers in SSRN have no economic value, and Elsevier can’t really claim ownership over them unless they were subsequently published in an Elsevier journal (which no doubt a substantial portion of them were). Some large number are probably published elsewhere, which creates an interesting conflict. But ultimately Elsevier has such deep pockets that such lawsuits would disappear like lint and crumbs into their depths.

So what is the real value in SSRN? Data.

The data produced by SSRN is not terribly sophisticated stuff: number of papers and authors, number of downloads, number of citations, per paper and per author. Lots of other companies and services attempt to collect the same kind of data. But what makes SSRN distinctive is that it is a well-known node in the network—we might say, in the discourse or mind-space—of social science.

Because it is “open access” and because it collects an impressively diverse range of social science in one place, SSRN’s data actually represents the world of social science scholarship reasonably well. There are lots of social scientists who don’t use it—none of my friends in sociology or anthropology do, for instance. Remember though: All models are wrong, but some are useful. SSRN represents better data about the impact of social science research than any single journal, or any publisher’s data (even Elsevier, with its hundreds of social science journals), because it has been built on the good will, apparent neutrality, and level playing field of an open access repository. Until Tuesday May 17, it was not tied to any particular publishing conglomerate or metrics company, and so its data represented social science research with a fidelity to reality that is expensive and hard to otherwise achieve–the result of the hard work of Gregg Gordon, his staff, and advisers to be sure.

Why is this valuable to Elsevier? Because it’s valuable to academics. Like the Impact Factor, which is also owned by a for-profit high-margin company—Thomson Reuters—SSRN is a valuable input in the bureaucracy of academic personnel. Academic administrators long ago gave up evaluating scholars based on quality or innovative research, and turned to evaluating “impact” instead. And impact is a sort of metaphysical quality that is not in the research itself, but in the circulation and reception of research—it can only be captured by metrics, which requires collecting data. The reason is obvious to anyone who works in the university: impact = higher rankings, higher rankings = more and better students, more donors, more reputation for the institution… all of which translates into the ability to hire more high-impact researchers. If that kind of data is valuable to academic administrators, Elsevier is right to focus on collecting more of it, monetizing it, and selling it back to universities. After all, the model of taking all the labor of academics, packaging it up, and then selling it back to universities and academics has worked really, really well for them in the past.

Don’t take my word for it though, here’s the press release:

“Elsevier is actively linking data and analytics to its vast content
base in ways no other potential SSRN partner can match. By
connecting Mendeley, Scopus, ScienceDirect and its editorial
systems, they’re helping researchers get a more complete picture of
their research landscape. Institutions will also benefit with a
better view of their researchers’ impact.”

Translation: “helping researchers get a more complete picture” means something like “reducing impact to a metric that can be applied to an individual scholar”; whereas “institutions will benefit” actually means “universities will pay big bucks for this data.”

What’s wrong with a big for-profit company producing such metrics? What does it mean if Elsevier owns this data? Here’s where there is no difference between the old private SSRN and the new publicly-traded for-profit Elsevier-SSRN. In neither case is something like “data skepticism” possible. The data is not available to the people or institutions or disciplines it purports to measure. It cannot be contested, it cannot be re-analyzed, it cannot be investigated, it cannot be downloaded. It just has to be trusted.

The problem with this—from any self-respecting academic’s standpoint—is that this is anathema to our very practice as academics: data is never the truth, data is never “raw,” data is always messy, dirty, constructed, interpreted, analyzed, re-analyzed, and uncertain. It’s garbage in, garbage out: if the data is bad, the analysis will be bad, and the only way to know that is for many people to see the data and contest it. At the end of the day, we academics prefer expert human judgment about data to unverifiable claims based on secret data that we cannot see.

Needless to say, this is not how for-profit businesses think about data in the era of Big Data, data analytics, predictive analytics, etc. For a business, every data point is a potential strategic advantage in a competitive marketplace, and they keep that data as close to the chest as possible. Think about the difference between the data the US Government produces about jobs numbers vs. the data that Monster.com or LinkedIn collects about the US labor market. Getting the former data is virtually a human right; getting the latter, well, good luck and Godspeed. We have the former because we need a universal, shared metric for policy making and for dispute about properly political issues like whether to invest in infrastructure, lower tax rates, or create stimulus. That doesn’t mean LinkedIn or Monster.com should use only that data, or not collect their own (indeed, their data may very well be much better), but there is no way we are going to set public policy based on LinkedIn’s secret data—at least not in a functioning democracy. But that’s another issue.

Ironically, however, this is exactly the world we academics have built for ourselves: a world where a very large number of judgments about quality, employability, promotion, tenure, awards, etc. is decided by opaque metrics collected by for-profit firms. I have no idea what Elsevier’s grand plans are for SSRN—they are probably a modest intermediate level of evil, and not the Advanced Evil they practice normally. But their bigger plan is to get out of subscription-based publication models altogether. It’s a losing model: the pirates are doing a much better job, the academics want open access, the libraries hate them—no one would blame them for jumping ship (as clumsily and slowly as one of the world’s largest corporations can “jump ship”) into a strategy based on monetizing data and metrics. It’s obvious that they seek to do the same thing with their purchase of Mendeley and their expansion of their portfolio in the direction of metrics. And they are not alone: ResearchGate and Academia.edu are competing for the same institutional attention and dollars by monetizing data about academics and their work, produced through platforms that extract that data in return for “hosting” the nominally valuable part: our work.

So while we should be anxious about Elsevier’s immediate plans for SSRN—will it remain “open access”, will they do something evil with the papers—the real elephant in the room is that we ourselves—we scholars—are producing the market for data every time we insist on evaluating a colleague according to some hack metric like Impact Factors or SSRN downloads. It’s not that such metrics are bad in themselves (but cf. Goodhart’s law), but when they are inaccessible to skepticism or scrutiny, when they cannot be analyzed differently by different actors, we set ourselves up for a world where we buy access to data about ourselves that we cannot be sure represents us accurately, in order to make decisions—sometimes trivial, sometimes existential—about our careers and ultimately the quality of our work and the problems social scientists deem worthy of attention. SSRN will continue to be a valuable place to post papers—lots of your colleagues will see it there. Elsevier may even be magnanimous and keep all those papers “open access”—but we should on all accounts avoid taking their data seriously. Data skepticism needs to be part of the brave new world we are building, and that is not something the Elseviers of the world are interested in.

ckelty

Christopher M. Kelty is a professor at the University of California, Los Angeles. He has a joint appointment in the Institute for Society and Genetics, the department of Information Studies and the Department of Anthropology. His research focuses on the cultural significance of information technology, especially in science and engineering. He is the author most recently of Two Bits: The Cultural Significance of Free Software (Duke University Press, 2008), as well as numerous articles on open source and free software, including its impact on education, nanotechnology, the life sciences, and issues of peer review and research process in the sciences and in the humanities.

28 thoughts on “It’s the Data, Stupid: What Elsevier’s purchase of SSRN also means”

  1. You said “the data just has to be trusted”. On the contrary, SSRN has been a part of the effort by the National Information Standards Organization altmetrics working group (Mendeley has, too) to develop a trusted framework for collecting, measuring, displaying, and reporting altmetrics data. http://www.niso.org/topics/tl/altmetrics_initiative/

    In addition, Mendeley is participating in Crossref’s Event Data service, which provides “community infrastructure to collect, store, and provide this raw data for anyone to access”. http://eventdata.crossref.org/

    So the data-mongering fears turn out to be somewhat unfounded.

  2. @mrgunn This is helpful. If I had to pick a poison, I suppose I would choose CrossRef and Altmetrics over something like Impact Factor. But corporations contribute to standards formation all the time (it’s in their interest to dominate the resulting standards, obviously), and you probably can’t convince me that Elsevier doesn’t see an opportunity to craft its own proprietary metrics or metrics tools, or fancy metrics-tracking dashboard, or whatever the plan is… which it hopes will become indispensable to universities. Perhaps I am wrong about that, but the reality is that the company has created such vast amounts of ill will that I doubt very much that “contributing to CrossRef” is going to make me any less suspicious of its motives.

  3. “…the real elephant in the room is that we ourselves–we scholars—are producing the market for data every time we insist on evaluating a colleague according to some hack metric like Impact Factors or SSRN downloads.”

    That’s it.

  4. Back in 2012 the AAA announced a partnership with SSRN which I wrote about on the blog. Anyone know what happened with that? I see that there is a page for AARN and it says it is directed by Louise Lamphere. Other than that I can’t figure anything out.

  5. I don’t know the answer, but one of the things that SSRN tried to make money doing was to forge “networks” in particular disciplines and then sell those networks to libraries–I think basically like virtual journals that would be useful to scholars keeping up somehow. It obviously wasn’t too successful since none of us know anything about it, but maybe someone from a library will enlighten us.

    If it’s true that SSRN was getting revenue from libraries—which is what a lot of OA publications and repositories either do or are trying to figure out how to do—then I wonder if libraries will cancel those funds ASAP, or are locked into funneling yet more money into Elsevier’s underground treasure room.

  6. I had an extended conversation with Dr. Lamphere about AARN at the 2012 San Francisco AAA’s and, respectfully, she didn’t seem to know what was happening with that either.

  7. I tend to agree with Kelty’s argument, and the timing of Elsevier’s purchase is precipitous given that Crossref announced this month that it will enable DOI registration for preprints by August 2016 and ORCID iDs have throughput to Crossref. There is a lot of social network data there to mine and sell as a service via Scopus and other products.

  8. I might also add that Elsevier is a member of CHORUS, a response to public access policies and systems like the NIH Manuscript System (NIHMS). Commercial publishers who fought congressional public access policies like NIH’s (which went into effect in 2008) prefer CHORUS because it leverages their existing infrastructures, which they can continue to control (and profit from). You will see that NSF—perhaps the largest federal funder of social / economic research—is working with CHORUS.

  9. @bibliophile Yes indeed, this is very true. On the one hand, it’s basically a good thing, insofar as the world of scholarship and scholarly communication needs standard systems in place not just for making work available, but for indexing it, making it discoverable, and making the metrics that none of us really trust anyway a little bit more trustworthy. On the other hand, my opinion is that corporations do this because they can’t otherwise work together, so they need intermediaries like NSF/NISO/CrossRef to give them a forum to reduce the cost of every one of them inventing their own systems, or re-inventing the wheel. That presumes, however, that we (scholars) actually want what they are standardizing. But these initiatives are pretty well distant from the reality of scientific work or the experience of working in a university or corporate research lab/institute—they are much more focused on the administrative management of universities and academic personnel. They will nonetheless have massive, mostly pernicious, effects on scholars in the name of “helping,” “improving,” “standardizing,” “streamlining,” etc. There are a lot of well-meaning people in CrossRef/NISO, so I don’t mean to malign them… I’m rather trying to point to the bigger picture here.

  10. Phil Cohen, of Family Inequality (and a sociologist at Maryland) is trying to start SocArXiv, working with Center for Open Science. Contact him if you’re interested in helping or following his progress.

  11. I’m all for mutual aid and the enthusiastic creation of alternatives—I did write a book about one aspect of such things… but… I think if we want to make a change in this domain we need to think much bigger. ArXiv and bioArXiv exist already—they should be among the people thinking about expanding or setting up such things. Also: LaTeX.

  12. ckelty: “I think if we want to make a change in this domain we need to think much bigger.”

    We should do a series about this. Or a post. Or something. I’d love to hear what you have in mind. I’m going to have more time to dig back into all of this publishing/OA stuff this fall, finally.

    bob: thanks for the tip about Phil Cohen. I’m interested.

  13. Lest we forget, Thomson is shopping Web of Science. Impact factor will likely fall into new hands looking to leverage more profit from analytics….

  14. Am I right that Invisible Hands are still pushing us around? Elsevier cares not a whit for the quality of papers or books or collaboration or cross-fertilization. There is also no concern for helping universities or scholars to address the grand apocalyptic issues looming more and more urgently over the poor, over the climate, over dealing with sociopath factions, over the choices between smart phones and wise phones. Could we get our faculty and students to direct their attention (as Wittgenstein and von Wright and Veblen suggest) to broad questions about what our current values are and what they could be. Von Wright’s current Centennial in Helsinki and later this year in Great Britain and the USA includes papers on these topics and philosophy is looking more and more like it belongs near the center. jwp at Humboldt.

  15. The very, very smart people at Author’s Alliance have pointed out that I may be too glib about the dangers this poses to the publications, in addition to the data issues I raise. I wouldn’t want to minimize the kind of asshattery that Elsevier is now capable of at SSRN. To wit: read this list of things that Elsevier would need to do to maintain SSRN as open.

  16. ckelty: “I’m rather trying to point to the bigger picture here.”

    Yes, and quite right, but I think the question of “the reality of scientific work” can’t be abstracted from the administrative, for just what is the index of research data measuring (and producing), and what ought it? Should productivity (say in number of publications and their impact on a profession and discipline) be a measure of success or that supported research has reduced a problem (in say a health or other socioeconomic indicator improving)? Let me take just one example, NIH/NHLBI did a bibliometric analysis of their funded obesity research from 1983-2013 using Thomson Reuters InCites™ database and concluded it was substantially productive, in that, among other things, it increased 10-fold over 30 years, rather than the benchmark twofold of all other NHLBI research. During that same period, according to the USDA, Americans ate almost 20 percent more calories. As has been known, and research continues to show, many health problems, from cardiovascular to asthma and inflammation, can be reduced by changing one’s habits, including a healthy diet. That we are keeping our bad habits is arguably one reason health care expenditures continue to rise (by 5 percent from 2013 and 2014 in the US, double the US GDP over that time, and as Elizabeth Warren and her colleagues have pointed out, perhaps the biggest cause of personal bankruptcy). As a society, I would guess, we are offering the wrong carrots, overlapping, in some way, this blog’s series on meritocracy.

  17. Christopher: you wrote: “The very, very smart people at Author’s Alliance have pointed out that I may be too glib about the dangers this poses to the publications in addition to the data issues I raise.”

    I’m still trying to find where they point specifically to anything you have written about this. Could you provide a more direct link? Thanks.

  18. I apologize that I’ve strayed a bit from the topic, but the point I’m trying to make is that a lot of Public Access (not quite Open Access) policy is at the funder / supporter level, and the argument is often made that the public has funded the research so should not have to pay for it twice, etc. Yet funders are quite preoccupied with, as you say, the administrative tasks of monitoring and policing compliance with these policies, adding administrative tasks for their funded investigators, which may increase costs for students, etc. Funders are also using, as you note, bibliometric databases to analyze and thus evaluate the productivity of their funding. You are right to be concerned, as am I, that commercial interests—such as Elsevier’s—may control more and more of the infrastructure on which those tasks are conducted and metrics gathered. Yet, beyond that, are those metrics even the best ones to evaluate our public research and development, and thus the most effective-to-date, in my experience, argument for public access to research—its resulting productivity? I will try not to dig myself deeper in this hole now.

  19. @barbara oh they just sent me an email, and I wanted to make sure people knew what’s actually at stake with the copyright issues.

  20. “are those metrics even the best ones to evaluate our public research and development”

    no. but are any? Cf. again, Goodhart’s Law. I don’t begrudge anyone the right to try and measure impact, but we (scholars) fail, when we make such measures into targets–especially if they are secret, bad, and unaccountable metrics.

  21. Open access is good and important and should be kept as free as possible from corporate manipulation. However, in order to facilitate intellectual communication there is another, much less developed side to open access: open response. A perfect example of that is happening right here, with individuals contributing directly to an interesting discussion – and all without the benefit of peer review or time-consuming editorial changes (though SM’s policy of “moderation” is just a tad suspect). Isn’t free and open exchange the best way to evaluate the ideas contained in an article or book? And with existing technology, wouldn’t adding a “comments and replies” section to online journals and other publications be a simple matter? Yet anthropology publications that have gone online and now trumpet their magnanimous gesture of becoming open access draw the line at allowing readers to respond. Without free and open dialogue, those publications still adhere to an authoritarian policy that places them in the position of casting pearls before swine. Surely anthropologists can do better.

  22. Lee, I definitely agree that we can do something better, especially with all of the possibilities we have at our fingertips.

  23. “are those metrics even the best ones to evaluate our public research and development”
    “no. but are any? Cf. again, Goodhart’s Law.”
    Goodhart’s Law emerged, as I understand it, from monetary policy; that he argued once a metric is made a target, then players do their best to game the system. Of course, as I was taught, money is simply an IOU, and to fetishize it as a commodity or standard, in the tradition of another famous economist’s arguments, is to violate the ethical human relations those exchanged debts and obligations index. So, to answer your question, and to try to return to the point I did not make well with my examples, there may be some metrics that are better than others. As the Greeks held, our measure ought to be humanity in all its mortality, and not as is so often the case today, the reputed knowledge, via computed productivity and citability, of experts. As Bernard Williams, whom in my older age I am beginning to appreciate more, argued, health, for this reason, is a good that differs from that of others, whether wealth or socioeconomic productivity. In economics and philosophy this led to the capabilities approach, which I have my concerns about, for once one tries to index capability as a standard for nation states to reach, it can be gamed too, and in a Foucauldian sense, be used as a stick for self-discipline. I similarly have a problem with approaches like that of Wellcome Trust, a large funder of gold open access, who strives, in part, to measure the impact of their funded research by whether a funded study is cited in a clinical guideline or other form of public policy document, arguing that it thus has public policy relevance, and is thereby contributing to the public good. I think we need to look closer on the ground, as ethnographers do, whether humanity, and the world we inhabit, is really better off or not, as John W. Powell suggested. And in that sense, as Lee said, so much more eloquently than I could, a truly open and free exchange is the responsible medium for getting there. 
But how many of us are alienated from that exchange because of our lack of recognized expertise, secured more and more in highly exclusive – and expensive – networks of social power, some of those being social networks of scholars? I think we may actually be saying the same thing, except I would extend the picture broader than the world of scientists and scholars and their exchanged output.

  24. To extend the picture is a very good idea, indeed. It would also be good to step away from the idea that what we discover, think, and write about is ipso facto valuable to others. When I was hired by the Japanese ad agency where I earned my living for a dozen years, a wise woman named Alice Buzzarte, who was then the dean of English-language copywriters working in Japan, took me aside and said, “John, to succeed in this business you will need to develop a thick skin. You have to realize that at least three out of four of your brilliant ideas are going straight into the trash can.” Three out of four was, in retrospect, a very generous estimate.

    Returning, however, to broadening the picture. Yes, our world is shaped by the ideas that people have about it, and, yes, much of what we see in our world is a legacy of colonialism, imperialism, now neoliberal globalization. We need, however, to dig a bit deeper. Let me take, for example, Japan. Japan was one of the nations defeated in WWII that then went on to prosper mightily in the decades after the war. Why? Part of the answer was a highly educated population and a workforce familiar with modern technology. Another was the wartime destruction of much of the nation’s industrial base, so that reconstruction meant installing the latest equipment and producing state-of-the-art products. Japan shared both of these advantages with Germany, another nation that did well in the postwar years. Why, then, was Japan’s postwar growth an “economic miracle” that far outstripped Germany’s? Two additional factors come into play. Along with Taiwan, Japan was one of two places in Asia to undergo a successful, non-violent land reform, which transformed farmers in the countryside from a rural proletariat into smallholders who became firm supporters of conservative, pro-growth parties. Political stability provided the foundation for rapid growth. (Warning: in this context, “conservative” does not mean what it has come to mean in neoliberal orthodoxy.) And, yes, there was one more thing: an influx of foreign capital just at the moment that Japan’s economy was ready to take off. Why? Special procurements by the U.S. military to support US and allied troops fighting the Korean War. (Taiwan would benefit in a similar way from the Vietnam War.)

    All of these factors may pale, however, beside the huge residual demand for industrial products that the devastation of World War II created, a window of opportunity that both Germany and Japan were able to exploit. This is important to remember because, as William Greider pointed out in One World, Ready or Not: The Manic Logic of Global Capitalism, the postwar development model—(1) industrialize, (2) sell first to a ravenous US market several times larger than your domestic market, then (3) expand first sales, then manufacturing, to other more recently developing markets—is a mathematical impossibility in a world where the populations, and thus potential market size, of both China and India dwarf that of the US. It might still work pretty well for sizable minorities who make up a relatively small proportion of the population but are, in absolute terms, very large. It cannot work, unmodified, for the rural poor.

    What, then, of academia? Like Germany and Japan, academia has grown fat on the postwar baby boom and growth of Western economies. Both population and economic growth have slowed. Establishments cling to what they see as theirs, while newcomers find that the myth that higher education leads directly to upward social mobility is increasingly implausible. What arguments couched in terms of another modern myth, that all voices are equally valuable, can do to change this situation is far from clear.

    Yes, we need a broader perspective, and some deeper thinking as well.

  25. John, with apologies to ckelty for shifting the topic, permit me to respond to your comment. Part of Japan and Germany’s post-WWII success was indeed, as you note, a byproduct of the US’s foreign policy, through instruments like the Treaty of Mutual Cooperation and Security between the United States and Japan and the Marshall Plan (European Recovery Program). The US understood that economic security and mutual cooperation in and with these countries were a necessity for international peace. Whether the recent shift in US foreign policy, in which economic aid is tied to national security interests (with a USAID representative at the National Security table), is stimulated by the same understanding remains to be seen. But looking at FY2014 figures, one can see what experts see as security interests, the top five recipients in combined military and economic aid per person being Israel, Afghanistan, Jordan, West Bank/Gaza, and Lebanon, in that order.

    As for “What arguments couched in terms of another modern myth, that all voices are equally valuable” might play, I think it is critical that we analyze the very structure of that myth. Just as with money, language and cultural identity, I would argue are but IOUs, and to mistake the complex relations – whether historical or other – they index as fixed standards or values, will only lead to conflict. Value is fluid and emerges through exchange, and thus the mutual health and security of that exchange, through, in part, recognition of our shared, yet diverse, humanity, in all our glory and horror, and through equal rights to access and participate in the exchange, will materialize in actual peaceful growth, or so I reasonably believe.

  26. bibliophile, I share the dream that you articulate so well in your last paragraph. What I don’t see is how it is possible. We live in a world where decisions have to be made, from which authors we will read and which products to buy, to steps that may be necessary to save the world from catastrophe. Every single decision requires a choice between better and worse, and frequently important decisions need to be made quickly if disasters are to be averted. Are those empowered to make decisions always right? One would have to be mad to think so. But a world in which we all hang out and sort out our differences through friendly, rational conversation? I see no evidence that any such place ever existed or ever could. What, then, is a pragmatist to do? I follow Richard Rorty. I think that equal opportunity and basic protections for health and welfare are a good thing. I favor political mechanisms that periodically reset the game and prevent the formation of rigid hierarchies. I see reaching those goals as difficult but possible and well worth fighting for. I really do wish that someone could show me how kumbaya becomes an organizing principle for a planet with more than seven billion inhabitants. I love moments of communitas. I treasure them because they are rare.

Comments are closed.