Research Assessment, Business Applications of Metrics and the Importance of Context: My Thoughts on the First Scholarly Kitchen Webinar
Earlier this week, I participated in the first Scholarly Kitchen Webinar for SSP. I’m glad to say the webinar was really well attended and first impressions suggest it went pretty well.
The title, ‘The Future of Metrics’, dovetailed well with some of the recent themes on the Perspectives Blog. Certainly, the Impact Factor (IF), which I’ve spent much of October writing about, came up a lot, because you can’t talk about metrics without it. The role of metrics in research evaluation was also a key topic.
The webinar was moderated by esteemed Head Chef and renowned goatee sporter, David Crotty. My fellow panelists were Diana Hicks, professor of public policy at Georgia Tech, and Robin Chin Roemer, instructional librarian at the University of Washington. Hicks is first author on, and spoke about, the Leiden Manifesto, which I posted the video version of fairly recently. She very graciously stepped in at the last minute when James Wilsdon, Professor of Science and Democracy at the University of Sussex, was forced to pull out after being called in to attend a government policy meeting associated with research evaluation.
Roemer spoke from the library perspective and gave some fascinating insights into why librarians are interested in altmetrics and what concerns many of them have. According to Roemer, altmetrics give librarians new opportunities to better inform their decision making with quantifiable and contextual data, while also helping them show the context and value of their own work. She also highlighted the need to be clear about what we’re measuring and to be sure that data sets are complete; ideas that very much reflect the principles in the Leiden Manifesto and The Metric Tide.
For my part, I took a slightly different approach. As requested by the moderator, I presented a series of case studies from Altmetric customers in which they themselves explained how they use both article-level and alternative metrics: externally, to support their end-users, members, and authors, and internally, to support editorial and business decisions. I’ll spare you the details here, but the case studies my talk was based on are all on Digital Science’s resource page, along with a number of free research reports from the consulting team, and the occasional festive video.
After the talks, the only question that was asked came from the moderator, and it was not an easy one. Crotty pointed out that one of the reasons we rely upon metrics is that they are scalable. For example, when selecting from 500 candidates for a university job post, it’s not practical to read every article published by every candidate to assess the quality of their work. If metrics are used as a guide, or a way to assist in decision making, how do we maintain the human element and avoid ceding the decisions to the numbers?
The stony silence that followed only lasted perhaps a minute before Hicks decided to take a stab at it. She suggested that in that example, perhaps the numbers are a first pass, helping to cut down the pool before zeroing in on the right candidate in a qualitative way. That’s a great point, and it gets to the concept of machine-enhanced decision making that is currently at the forefront of how we use metadata. As is usual with these things, I thought of something to say on the topic an hour or two later.
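To make that ‘first pass’ idea concrete, here is a minimal sketch in Python of how a metric might be used only to shortlist a large applicant pool before handing the shortlist over for human, qualitative review. The candidate records and the citation_score field are entirely hypothetical; the point is simply that the number narrows the pool, it doesn’t make the decision.

```python
# Hypothetical sketch: metrics as a first-pass filter, not the final decision.
# The candidate records and the 'citation_score' field are invented for illustration.

candidates = [
    {"name": "A. Researcher", "citation_score": 42, "articles": ["..."]},
    {"name": "B. Scholar", "citation_score": 7, "articles": ["..."]},
    # ...imagine ~500 of these
]

def shortlist(pool, top_n=20):
    """Use the metric only to cut the pool down to a readable size."""
    ranked = sorted(pool, key=lambda c: c["citation_score"], reverse=True)
    return ranked[:top_n]

# The shortlist then goes to people, who read the work itself.
for candidate in shortlist(candidates):
    print(f"Send {candidate['name']}'s articles to the committee for qualitative review")
```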
It occurs to me that we might be at the beginning of either changing the definition of metrics, or perhaps needing a new word. Much of the new technology in scientometrics and altmetrics is not simply about counting. People obviously like a nice, simple number to compare things by, but the most value comes out of understanding the contextual and qualitative information provided by metrics tools.
Research assessment is the obvious example of this, with funding bodies increasingly requiring impact statements that give a qualitative description of the impact of a body of work, rather than a score. That doesn’t mean that metrics tools don’t help here; in fact, they’re vital. As they gather evidence of impact, modern tools don’t just count: they collate the snippets of content, or ‘mentions’, along with the links, and present them in one place. Essentially, they do the legwork, so that the person making the decision can spend their time making meaningful judgements.
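As a rough illustration of collating rather than counting, here is another small, hypothetical Python sketch: instead of reducing everything to a single score, the mentions (with their text and links) are grouped by source so a reviewer can actually read them in context. The mention records are invented for illustration, and no real Altmetric API is assumed.

```python
from collections import defaultdict

# Hypothetical mention records; a real tool would harvest these from news,
# blogs, policy documents, and so on.
mentions = [
    {"source": "news", "text": "Study finds...", "url": "https://example.com/news/1"},
    {"source": "policy", "text": "Cited in guidance on...", "url": "https://example.com/policy/7"},
    {"source": "blog", "text": "A thoughtful critique of...", "url": "https://example.com/blog/3"},
]

def collate(mentions):
    """Group mentions by source, keeping the text and links rather than just a count."""
    grouped = defaultdict(list)
    for m in mentions:
        grouped[m["source"]].append(m)
    return grouped

for source, items in collate(mentions).items():
    print(f"{source} ({len(items)} mentions):")
    for m in items:
        print(f"  - {m['text']} [{m['url']}]")
```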
So there is no conflict between modern metrics tools and the desire to keep qualitative judgement a key part of the process, because context is preserved. Having all the mentions in one place is a massive time saver for those who want to read the sentiment and value behind the mentions themselves. Instead, the danger is that we ourselves, whether librarians, publishers, funders or researchers, succumb to the temptation of just relying on the numbers and ignoring the rich metadata that lie behind them.