Three Simple Ways to Improve the Tenure Process in the United States
Stacy Konkiel is a Research Metrics Consultant at Altmetric, a data science company that helps researchers discover the attention their work receives online. Since 2008, she has worked at the intersection of Open Science, research impact metrics, and academic library services with teams at Impactstory, Indiana University & PLOS.
Many institutions in the US and worldwide are changing the way that they assess academics for tenure. They’re making a move towards using a more holistic set of impact metrics (a “basket of metrics”, if you will) and, in some cases, doing away with metrics like the journal impact factor (IF) as a sole means of evaluating their faculty’s work. In so doing, they are righting wrongs that have plagued academia for too long, particularly the misuse of citation-based metrics for evaluation.
In this post, I’ll discuss which metrics should be abandoned, which we should put into context, and what data we can add to tenure dossiers to make it easier to interpret the “real world” impacts of scientific research.
Citation-based metrics are not inherently faulty…
Used appropriately, the journal IF can help librarians understand the importance of academic journals within the scholarly community (the purpose for which it was originally invented). And citations can help authors understand the scholarly attention that their journal articles or books have received over time.
However, citation-based metrics offer an incomplete picture of the total impact of research. Citations shed light only on the attention that articles and books receive (not other research outputs), and only on attention from other scholars. These limitations can be problematic in an era when an increasing number of scholars are moving beyond the journal article to share their work (often in the form of data, software, and other scholarly outputs) and many universities and funders are asking scholars to consider the broader impacts of their research (on public policy, technology commercialization, and so on).
Moreover, some citation-based metrics are being used incorrectly. The journal IF, in particular, is a victim of this: it is often used to measure the quality of a journal article, when it (a) is a journal-level measure and therefore inappropriate for measuring article-level impact, and (b) cannot measure quality per se, but instead can help evaluators understand the far more nebulous concept of “scholarly impact”.
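To see why point (a) matters, consider a toy example: citation counts within a single journal are typically highly skewed, so a journal-level average says little about any individual article. Here’s a minimal sketch with made-up numbers:

```python
# Illustrative (made-up) citation counts for ten articles in one journal.
# A single highly cited outlier drives the journal-level average.
citations = [100, 2, 2, 2, 2, 2, 2, 2, 2, 2]

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]  # middle value

print(f"IF-style mean: {mean:.1f}")           # 11.8 -- inflated by one outlier
print(f"Median (typical) article: {median}")  # 2
```

The journal-level average (11.8) describes almost none of the articles that produced it, which is exactly why it shouldn’t be used as a proxy for any single article’s impact.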
Luckily, a growing number of universities are changing the way research is evaluated for tenure and promotion. They are reducing their dependence on journal-level measures to understand article-level impacts, and taking into consideration the effects that research has had on the public, policymakers, and more.
These institutions are using metrics much more holistically, and I think they serve as an excellent roadmap for changes that all universities should make.
The current state of “impact” and tenure in the United States
At many universities in the US, researchers who are being assessed for tenure are required to prepare a dossier that captures their value as a researcher, an instructor, and a member of the larger disciplinary community. And at some universities with medical schools, a researcher’s clinical practice may also be considered.
Dossiers are designed to easily communicate impact both to a jury of disciplinary colleagues (other department members and external colleagues within the same field) and to university committees of faculty and administrators, some of whom may know little about the candidate’s specific research area.
There are many ways to showcase research impact, and universities typically offer dossier preparation guidelines to help faculty navigate their options. More often than not, though, these guidelines primarily recommend listing raw citation counts and journal IFs as a means of helping others understand the significance of a candidate’s work.
But there are better and more inclusive recommendations for the use of research impact metrics in a dossier. Here are three in particular that I think form a good starting point for anyone interested in reviewing their own university’s tenure and promotion preparation guidelines with an eye towards improvement.
- Improvement 1: Remove journals from the equation
The journal an article is published in cannot predict the quality of that article. Moreover, journal titles may unfairly prejudice reviewers. (“I don’t recognize this journal’s name, so it must be rubbish.”)
So, why don’t more institutions disallow any mention of journal titles in tenure dossiers? Or make it clear that journal IFs should not be (mis)used as evidence of quality?
At the very least, instructions should require reviewers to read and evaluate the quality of an article on its own merits, rather than rely upon shortcuts like journal IFs or journal title prestige. And contextualized, article-level citation counts (which I’ll talk about more below) should only be used to inform such reviews, not replace them.
- Improvement 2: Require context for all metrics used in documentation
Tenure dossier preparation guidelines should make it clear that raw citation counts aren’t of much use for evaluation unless they’re put into context. Such context should include how the article’s citations compare to those of other articles published in the same discipline and year, and possibly even the same journal. Impactstory profiles, for example, display this kind of comparison alongside an article’s citation count.
Context can also include who has cited a paper, and in what manner (to acknowledge prior work, commend a study that’s advanced a field, and so on).
Some researchers include contextualized citation counts in their dossier’s narrative section, for example:
> In 2012, I published my landmark study on causes for Acute Respiratory Distress Syndrome. The article has since been cited 392 times. (In contrast, an average epidemiology journal article published in 2012 has been cited only 68 times, according to Impactstory.org.)
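For researchers who want to generate such comparisons themselves, here’s a minimal sketch of a percentile-style calculation, assuming you already have a baseline list of citation counts for articles published in the same discipline and year (all numbers below are hypothetical):

```python
def citation_percentile(count, baseline):
    """Percentage of baseline articles cited no more often than `count` times."""
    if not baseline:
        raise ValueError("baseline must not be empty")
    return 100 * sum(c <= count for c in baseline) / len(baseline)

# Hypothetical 2012 epidemiology citation counts; a real baseline would come
# from a bibliometric database or a service like Impactstory.
baseline_2012 = [3, 12, 25, 40, 68, 90, 150, 210, 392, 610]

print(f"{citation_percentile(392, baseline_2012):.0f}th percentile")  # 90th percentile
```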
How researchers might best document contextualized impact may differ from discipline to discipline and institution to institution, but context is vital if academic contributions are to be evaluated fairly. So, those creating tenure and promotion preparation guidelines should take care to include instructions for how to provide context for any and all metrics included in a dossier.
- Improvement 3: Expand your university’s definition of impact
Citations are good for understanding the scholarly attention a journal article or book has received, but they can’t help others understand the broader impacts of research: whether it has influenced public policy, introduced a new technique that’s advanced a field, and so on. However, altmetrics can.
Altmetrics are loosely defined as data sourced from the social web that can help you understand how often research has been discussed, read, shared, saved, and reused. Some examples of altmetrics include:
- Downloads and views on PubMed Central,
- Mentions in policy documents,
- Bookmarks on Mendeley, and
- Software forks (adaptations) on GitHub.
Altmetrics are an excellent source of supplemental impact data to include alongside citation counts. They can help evaluators understand the effects that research has had on members of the public and other non-scholarly audiences (in addition to the non-traditional effects that research has had among other scholars, like whether they’re reading, bookmarking, and adapting others’ scholarship). They can also help evaluators understand the influence of scholarly outputs other than books and journal articles (for example, whether a piece of bioinformatics data analysis software is being used by other researchers to run their own analyses). Altmetrics also often include qualitative data, so you can discover who’s saying what about an article (providing some of the all-important context that I touched upon above).
Altmetric, the company I work for, offers a bookmarklet that makes it easy to find altmetrics from a variety of sources, including an important non-scholarly source: public policy documents. The report generated includes not only a raw count of the mentions an article has received in public policy documents, but also links out to the documents that mention it.
An Altmetric report for an article that’s been mentioned in several policy documents, for example, will list each of those documents alongside the overall mention counts.
Using the report data, researchers can more easily document the effect their research has had on public health policy, like so:
> My 2012 landmark article on causes for Acute Respiratory Distress Syndrome has been cited in at least three public policy documents, including the World Health Organization’s report on recommendations for reducing mortality rates for children living in poverty.
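Researchers comfortable with a little scripting can also pull these counts programmatically from Altmetric’s public details API. Here’s a minimal sketch; the endpoint is real, but the `cited_by_policies_count` field name and the example DOI are assumptions to verify against the current API documentation:

```python
import requests

def policy_mentions(doi):
    """Return the number of policy documents Altmetric has seen citing `doi`."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:  # Altmetric has no record of this DOI
        return 0
    resp.raise_for_status()
    # Field name assumed; check the response schema at https://api.altmetric.com/
    return resp.json().get("cited_by_policies_count", 0)

print(policy_mentions("10.1371/journal.pmed.0020124"))  # example DOI
```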
Mentions in policy documents are just one of the ways scientists can showcase the many flavors of impact their work has had. Others include: Has their article been recommended by experts on Faculty of 1000? Has their research software been widely adopted by others in their discipline? Are their peers discussing their recent articles on their research blogs? The possibilities for understanding the broader dissemination and applications of research are many.
An increasing number of universities and departments (including the University of Colorado Denver Medical School and IUPUI) are beginning to offer guidance on using (and interpreting) altmetrics data in tenure dossiers. Any instructions for using altmetrics should be careful to recommend including context (both “who’s saying what” about research and how much attention a work has received in comparison to other research in a discipline) and to steer candidates away from relying on standalone numerical indicators like raw counts or the Altmetric score (which we created as a means of helping people identify where there is altmetrics data to explore, not to rate the quality of articles).
And of course, researchers themselves need not wait for guidelines to include supplementary indicators like altmetrics in their tenure dossiers to help paint a more complete picture of their work’s impact. Researcher-oriented tools like Impactstory and the Altmetric bookmarklet, described above, can help them discover where their work is making a difference and provide the contextualized data to share those impacts with others.
It’s time for the tenure and promotion process to get smarter and more nuanced. Let’s start by using relevant data to understand real-world impacts, and by being more deliberate about how we use traditional data like citation counts, so that scholarly impact is put into better context.
Do you have ideas for how we could improve the use of metrics in tenure and promotion? Leave them in the comments below or share them with me (@skonkiel) via Twitter.