Why and How to Innovate in Publishing to Change Academic Incentives
Without fail, in every “transforming scholarly publishing” conference presentation or panel discussion I’ve ever attended, someone has voiced the argument that progress towards making science more open is being held back by the conservatism of university tenure processes. I’ve heard university press directors say that they are reluctant to pursue open access monographs, for instance, because they fear OA books will be discredited by scholars and administrators as vanity published rather than peer reviewed. Researchers occasionally say that they can’t publish in new journals, OA or otherwise, until they have tenure, due to the tyranny of the impact factor. Of course, opening up science involves more than simply making traditional articles open access. New forms of scholarly communication that make research outputs like data and code available are sometimes viewed with suspicion because those outputs often aren’t citable and therefore won’t be “counted”. In effect, the argument goes, until senior scholars and university administrators become less conservative and more open-minded about what signals quality in published scholarship, we’re hamstrung.
Having spent some time in an administrative capacity behind the tenure review curtain, I beg to differ. What I saw was a process that made use of any available indicators of quality that could be readily understood and applied fairly in a comparative context. The problem is that this set of available indicators is too limited. Let’s park to one side the fact that universities differ greatly both in their use of quantitative metrics (yes, some no doubt over-emphasize them) and in the extent to which they rely on other, more qualitative input such as peer letters. They all need and use readily available measures such as citation counts, journal impact factor, and publisher prestige to some extent. And, I would argue, as other sensible benchmarks become readily available, they will use those too.
That puts the onus on academic publishers to provide alternatives. Put another way, there is an opportunity for academic publishers to provide and legitimize new forms of scholarly communication and impact tracking. Beyond accelerating the progress of science, publishers should be independently motivated to do so to stay current with the expectations and needs (such as funder mandate compliance) of their authors and readers. On the open access side, many publishers have already stepped up to provide high quality open access journals. Some publishers, both commercial and non-profit, are now developing OA monograph programs as well, using a variety of funding models. OA monographs are not perceived as “vanity published”, so long as they are vetted, acquired, and edited with the same standards of rigor as non-OA works (and publishers communicate this fact effectively). There’s no evidence that tenure committees view novel business models from prestige publishers as reason to discount a tenure candidate’s book. What the committee may well expect, though, is a print copy of the book for reviewing convenience, and the typical quota of pre-publication book reviews, since such reviews are an important qualitative indicator in the social sciences, arts, and humanities.
The principle is no less valid for new forms of scholarly communication or alternative metrics. I firmly believe that innovative publishers can be a powerful force for change by legitimizing a broader representation of research and impact. Here are just a few examples.
(1) Creating more formal publication venues for non-traditional works like data and code is one way that publishers can help researchers broaden their record of scholarship. The Journal of Statistical Software is a good example: it promotes reproducible computation and software tools, giving scientists a forum both to publish the software they write and to cite the software they use. We need more journals like this. The Journal of Visualized Experiments similarly allows researchers to publish and cite videos of techniques and scientific data. But all journal publishers should support authors who wish to post and cite data associated with their articles, and by partnering with a repository service such as Figshare, they need not create this infrastructure themselves.
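To give a sense of how low the technical barrier has become, here is a minimal sketch of creating a dataset record programmatically against figshare’s v2 REST API. The endpoint, token scheme, and field names reflect the public documentation as I understand it; treat the specifics as assumptions rather than a recipe, and the token as a hypothetical placeholder.

```python
import requests

# Hypothetical placeholder for a figshare personal access token.
TOKEN = "YOUR_FIGSHARE_TOKEN"
API = "https://api.figshare.com/v2"
HEADERS = {"Authorization": f"token {TOKEN}"}

# Create a draft metadata record for a dataset that supports an article.
record = {
    "title": "Replication data for: <article title>",
    "description": "Raw measurements and the scripts used to analyze them.",
    "defined_type": "dataset",
}
resp = requests.post(f"{API}/account/articles", json=record, headers=HEADERS)
resp.raise_for_status()
print("Draft record created at:", resp.json()["location"])

# Uploading the files and publishing the record (which mints a citable DOI)
# are further calls against the same API; see figshare's documentation.
```

Once the record is published with a DOI, the dataset becomes a first-class, citable object that can be listed alongside articles in a candidate’s record.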
(2) Another way publishers can broaden the published record of impact is by including article-centric indicators in their own publications. This is exactly what Altmetric was designed to do. Altmetric collects the relevant discussions around journal articles from mainstream news outlets, scientific blogs, social media, policy documents, and many other sources, and makes this information available to publishers for use on their own platforms. While Altmetric helps readers follow the stories surrounding the research, more importantly from my perspective, it also helps authors track downstream attention to their own work. Altmetric is effectively empowering researchers to present other indicators of research influence within the evaluation context, and that’s a tremendous force for change. Slowly but surely, we are seeing researchers beginning to use the tool, starting their own revolution.
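For readers who want to see what sits behind those indicators, Altmetric exposes a free public API keyed on standard identifiers such as DOIs. A minimal sketch in Python follows; the response field names are assumptions based on the public documentation, and the DOI is just an example slot.

```python
import requests

# Altmetric's free public API is keyed on identifiers such as DOIs.
doi = "10.1038/nphys1170"  # substitute the DOI of the article you care about
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.status_code == 200:
    data = resp.json()
    print("Title:", data.get("title"))
    print("Altmetric score:", data.get("score"))
    print("News stories:", data.get("cited_by_msm_count", 0))
    print("Tweets:", data.get("cited_by_tweeters_count", 0))
else:
    # A 404 simply means no attention has been tracked for this DOI yet.
    print("No Altmetric record found for", doi)
```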
(3) An even more radical way for publishers to enrich the representation of scholarship that informs appointment review processes is by providing structured information about contribution to multi-authored works. With the growth in collaborative authorship across all fields of science, we are seeing long lists of authors, with, at best, unstructured information about who contributed what to the collaboration. This can be especially problematic for junior researchers and those who contribute in valuable ways that don’t typically merit top billing, such as providing code or data.
Nobel prizes in physics often go to people who discover new particles, and yet the key Higgs boson paper has 3,171 authors. Before the announcement of the Nobel Prize in Physics last year, there was a satirical post on Scientific American’s blog reporting that the prize committee had announced the award would go to the particle itself, because it wasn’t clear which humans actually deserved credit for the particle’s discovery. The relationship among authorship, invention, and credit today is largely broken, but publishers can take the lead in fixing it by capturing information about contribution in a way that is readily accessible to, and digestible by, tenure review committees. Several publishers are already engaged in Project CRediT, which is developing an open standard for expressing the contributor roles intrinsic to research.
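To make the idea concrete, here is an illustrative sketch of what machine-readable contribution metadata might look like using the CRediT role names. The data structure itself is hypothetical, not a format prescribed by the project, and the author names are placeholders.

```python
# The contributor roles defined by the CRediT taxonomy.
CREDIT_ROLES = {
    "Conceptualization", "Data curation", "Formal analysis",
    "Funding acquisition", "Investigation", "Methodology",
    "Project administration", "Resources", "Software", "Supervision",
    "Validation", "Visualization", "Writing - original draft",
    "Writing - review & editing",
}

# A hypothetical contribution record for a multi-authored paper.
contributions = {
    "A. Author": ["Conceptualization", "Methodology", "Writing - original draft"],
    "B. Author": ["Software", "Data curation", "Formal analysis"],
    "C. Author": ["Supervision", "Funding acquisition", "Writing - review & editing"],
}

# Check that every declared role is a recognized CRediT role.
for person, roles in contributions.items():
    unknown = [role for role in roles if role not in CREDIT_ROLES]
    assert not unknown, f"{person} lists non-CRediT roles: {unknown}"

# A review committee or indexing service could then query by role,
# for example to find who actually wrote the code behind the paper:
print("Software contributors:",
      [p for p, roles in contributions.items() if "Software" in roles])
```

Captured in structured form like this, contribution information could travel with the article metadata and be aggregated across a candidate’s full record, rather than being buried in a free-text acknowledgments note.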
These are just a few of the many ways in which publishers can help evolve academic incentive structures by legitimizing an expanded record of scholarship and impact, making science better in the process. Whether or not publishers see this as an obligation, they should see it as good business. It won’t be long before authors (many of whom already choose open access over paywalled journals), along with readers and libraries, start choosing where to publish based on which publishers enable them to communicate, and be credited for, their contributions as they see fit.