Exploring the ALPSP – Search on the Open Web, UX and Peer Review
This post follows on from last week’s, which served as a kick-off for my own personal conference season.
Last week, I attended the ALPSP international conference in London. Congratulations first and foremost to Audrey McCulloch and the ALPSP team for putting on a fantastic series of talks and panels, not least of which were the duelling keynote talks from Anurag Acharya, co-founder of Google Scholar, and Kuansan Wang, Director of the Internet Service Research Center at Microsoft Research. The pair, speaking in equivalent keynote slots on the first and second days of the conference respectively, outlined very different views of academic discovery on the open web.
Unsurprisingly, both speakers were asked about the degree to which they track individual user behaviour and how that information is used – and, interestingly, they described very different scenarios. Perhaps a little evasively, Acharya didn’t confirm or deny whether Google actually tracks user searches at an individual level, but he did say that the information isn’t used to personalize future searches. Citing the difference between “general” search for, say, a local business, and the geographically global nature of “academic” search, Acharya suggested that personalizing Google Scholar wouldn’t yield much additional value. Conversely, Wang described a philosophy of highly monitored, highly personalized search, through Bing and Cortana, that would adapt to individual users’ needs.
During the Q&As and the two respective post-keynote coffee breaks, it seemed to me that most of the audience agreed with Wang’s perspective. For example, when identifying whether a researcher is searching within their own field or outside of it, or when certain keywords have different meanings in different fields (e.g. plasma), knowing a bit about the searcher might help in providing the right type of results. On the other hand, it would be remiss of me not to mention the privacy concerns that people have about this type of data gathering. I wrote a post on the Scholarly Kitchen exploring that idea some time ago.
Another favourite session of mine was the first plenary on understanding the needs of researchers by studying them. I have a lot of enthusiasm for this approach. Publishers’ main clients have traditionally been libraries. Recently, however, there has been a shift in the publishing industry towards acknowledging researchers themselves as the ultimate customer. Lettie Conrad of Sage gave a fantastic account of the work that they have been doing applying User Experience (UX) techniques such as usability testing to the search and discovery process of a variety of researchers. A white paper on the subject is here. It turns out that when you sit and watch a researcher try to find something, they behave in ways that publishers and librarians don’t expect. It becomes pretty clear that the workflows we’ve designed as publishers and librarians aren’t really working for many users. Among the observations that Conrad discussed, the most worrying was the tendency for researchers to copy and paste citations out of PDFs and into Google. Conrad also reported that search generally starts on the open web with researchers moving to library-based discovery tools as a way to authenticate, once they know what they want to download.
Conrad’s talk segued well into the presentation by Deidre Costello of EBSCO Information Services. As part of her talk, Costello showed a highlight reel from a series of video interviews conducted with undergraduates getting their first taste of research. Some of the responses from students were enlightening and sometimes darkly amusing. A general sense of fear and loathing of the literature was apparent, as well as a worrying lack of understanding of the purpose of librarians.
One intriguing observation that Costello made was the difference between how students say they find content and how they actually do it. She noted that few say they start with Google, while most of them actually do. A possible explanation is that Google is so firmly embedded in young researchers’ routines that they don’t even think about the fact that they use it. You wouldn’t expect somebody to tell you that they opened an internet browser, would you? It’s an obvious, assumed step.
The last session I’d like to mention was from the final day, titled “Peer review: evolution, experiment and debate.” The panel included fascinating presentations and turned into a great discussion during the Q&A section. Aileen Fyfe’s potted history of peer review really illustrated how much the concept has evolved over the years, from an essentially editorial check for things like sedition and blasphemy, to something that, fairly recently, has come to be expected to detect scientific truth itself. One recurring theme emerged from the discussions: the fact that too much is currently being asked of the peer review process. With the mantra of “publish or perish” truer now than it’s ever been, it can be argued that publishers find themselves unwittingly in the position of administering the process that decides whose career advances and whose doesn’t.
There was certainly plenty to think about at the ALPSP conference. I’m looking forward to thinking and talking about some of these ideas a bit more. The concept of personalization of academic search poses many unanswered questions about the role of search and how it shapes the research process, not to mention privacy concerns. I’m also really glad to see people doing such great work around user experience. On the other side of the coin, the overlap between research assessment, academic advancement, quality control and peer review is a big and complex topic. No doubt we’ll be hearing more about that.
Finally, congratulations to Kudos, who won the ALPSP award for innovation, and to JSTOR Daily and our portfolio company Overleaf, who were both highly commended.