What’s so Wrong with the Impact Factor? Part 2
In last week’s perspective, Part 1, I asked the question ‘What’s wrong with the Impact Factor?’ Anybody who’s followed the debate over the years will be familiar with many of the common objections to the metric – here’s an example of a blog post on the subject. But how valid are those objections? Does the Impact Factor (IF) really harm science? And if so, is the IF the cause or just a symptom of a bigger problem? Last week I focused on the mathematical arguments against the IF, principal among them that the IF is a mean when it should be a median. This week, I’m going to look more closely at the psychology of the IF and how it alters authors’ and readers’ behavior, potentially for the worse.
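To recap why the mean-versus-median point matters, here’s a minimal sketch in Python with invented citation counts (not real data, purely illustrative). Citation distributions are typically highly skewed, so a handful of heavily cited papers can drag the mean – which is what the IF reports – well above the median that describes a typical paper in the journal.

```python
from statistics import mean, median

# Invented citation counts for the items a journal published over two years
# (purely illustrative – real distributions are similarly skewed).
citations = [0, 0, 1, 1, 2, 2, 3, 3, 4, 120]

print(f"mean (what an IF-style average reports): {mean(citations):.1f}")   # 13.6
print(f"median (the 'typical' paper):            {median(citations):.1f}") # 2.0
```

The single outlier accounts for almost the entire difference; the mean tells you little about the paper you are actually likely to read.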
Impact Factor is a self-fulfilling prophecy
Whether it’s the propensity of highly cited papers to gather yet more citations, which we discussed last week, or the fact that papers published in high-impact journals attract more citations simply because of the perceived value of the journal brand, the IF creates a feedback loop: articles in high-impact journals are perceived to be better, which leads to more citations, which raises the prestige of the journal, and so on.
It’s worth noting that in Anurag Acharya’s keynote at the ALPSP conference about a month ago, he talked about research he had done into changing citation patterns. The article is on arXiv, here. Acharya et al. showed that the fraction of top-cited articles appearing in non-elite journals is steadily rising. Acharya’s central thesis is that this effect is due to the increasing availability of scholarly content. Because scholars are no longer limited to the collections in their libraries, but can access information both in Open Access journals and through scholarly sharing, they are no longer restricted to reading (and therefore citing) articles published in core collections.
Others would argue that with a flatter search landscape through services like PubMed, Google, and arXiv, the power of the journal brand for readers (although perhaps not for authors) is steadily eroding.
It’s a journal-level metric that is misused as an article-level one
The IF was originally designed as a way to judge journals, not articles. Eugene Garfield, the scientometrician who came up with the measure, was simply trying to provide a metric to allow librarians to decide which subscription journals should be in their core collections. He never intended it to be used as a proxy measure for the quality of the articles in the journal.
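For context, the calculation Garfield settled on is simple: a journal’s IF for a given year is the number of citations received that year by items the journal published in the previous two years, divided by the number of citable items it published in those two years. A back-of-the-envelope sketch (the numbers below are invented):

```python
def two_year_impact_factor(citations_this_year: int, citable_items: int) -> float:
    """Citations received this year to items from the previous two years,
    divided by the number of citable items published in those two years."""
    return citations_this_year / citable_items

# Hypothetical journal: 2,400 citations in 2015 to its 2013-2014 articles,
# of which there were 800 citable items.
print(two_year_impact_factor(2400, 800))  # 3.0
```

Nothing in that ratio says anything about any individual article, which is the point: it was built to compare journals, not papers.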
You can hardly blame the IF itself for not being a good measure of research quality. Nobody said it was – or at least, they didn’t until recently. As Hicks et al. point out in the Leiden Manifesto, the ubiquitous use of the IF as a metric for research quality only really started in the mid-1990s. So, if the metric is misused, that leads us to an obvious corollary.
It’s unfair to judge researchers on the impact factor of the journals they publish in
If we’re judging researchers poorly, we’re likely to be denying grants and tenure to people who could be making more of a contribution. However, the question is: can we blame the IF itself for that?
If the impact factor only became the ubiquitous measure of research quality in the last 20 years, does that mean that publishing in Cell, Nature or Science was previously unimportant?
We can argue about whether it has become more or less important to publish in high-impact journals in recent decades, but one senior scientist told me recently that getting ‘a really good Nature paper’ launched their career. The reality is that even before the IF became the juggernaut it is today, articles in high-prestige journals were always seen as a measure of research quality.
Impact Factor isn’t the problem
The problem isn’t the measure itself. Sure, there are issues with it from a statistical best-practice point of view, and it seems to distort the way we value research, but I think something else is at work here. The problem is that when researchers are evaluated, the venue in which they publish is very often taken to be more important than the work itself. If we’re going to judge research and researchers fairly against one another in the future and move past the IF, that has to change.
For researchers and their outputs to be judged fairly, two things have to happen. Firstly, the trend towards article-level metrics and alternative metrics for evaluation has to continue and be supported by librarians, publishers, funders and scholars themselves. The study from Google that I mentioned above shows an erosion of the citation advantage conferred by the journal brand, but it’s happening quite slowly, and arguably only in terms of readership and citation, not authorship.
The second thing is more cultural and more subtle. When I speak to academics about the fact that assessment strategies are moving towards multiple measures and a broader sense of impact and value, the point is often met with suspicion. I wrote a post a while ago about confusion around the concept of excellence in academia. I think the reason for the suspicion is that reviewers on assessment panels are generally senior academics whose ideas of what constitutes good work are rooted in the age of the paper journal. If this is to change, funders, librarians and scientometricians must all do more to reach out to academics, particularly those who sit on review panels. We need a clearer, more consistent message about how assessment should be changing.
That’s what I think is wrong with the Impact Factor – or rather, how our obsession with it reflects a deeper problem. What do you think is the heart of the matter? Why are we so fixated on this overly simplistic metric? Is it really harming the advancement of knowledge? What can we do to change things? Please feel free to post a comment below. Alternatively, you can contribute to the conversation on Twitter using the hashtag #IFwhatswrong. Next week’s perspective will be a partly crowd-sourced post built from the ideas and thoughts that everybody contributes.