
Shining a light on conflict of interest statements

5th September 2024 | Simon Linacre

Authors either have a conflict of interest or not, right? Wrong. Research from Digital Science has uncovered a tangled web of missing statements, errors, and subterfuge, which highlights the need for a more careful appraisal of published research.


At this year’s World Conference on Research Integrity, a team of researchers from Digital Science led by Pritha Sarkar presented a poster with findings from their deep dive into conflict of interest (COI) statements. Entitled Conflict of Interest: A data driven approach to categorisation of COI statements, the poster began with a simple goal: to build a binary model that determines whether a COI statement is present in an article.

However, all was not as it seemed. While some articles had no COI statement and others did, the statements that were present covered a number of different areas, which led the team to suspect that COIs represent a spectrum rather than a binary choice.

Gold standard

Conflict of interest is a crucial aspect of academic integrity. A properly declared COI statement is essential for other researchers to assess any potential bias in a scholarly article. Yet even when a COI statement is present, researchers often find it inadequate or misleading in some way.

The Digital Science team, all working on research integrity with Dimensions, soon realized the data could be taken further to explore the nuances within COI statements. After further research and analysis, it became clear that COI statements could be categorized into six distinct types (illustrated in the sketch after the list):

  1. None Declared
  2. Membership or Employment
  3. Funds Received
  4. Shareholder, Stakeholder or Ownership
  5. Personal Relationship
  6. Donation

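To make the six-way categorization concrete, here is a minimal keyword-based sketch in Python. The patterns and the categorize function are illustrative assumptions for demonstration only; they are not the team’s actual model.

```python
import re

# Illustrative keyword patterns for the six categories; these are
# assumptions for demonstration, not the team's actual NLP model.
COI_CATEGORIES = [
    ("None Declared", r"no (competing|conflict)s? of interest|none declared"),
    ("Membership or Employment", r"employee|member of|advisory board|affiliat"),
    ("Funds Received", r"grant|funding|honorari|consulting|fees? (from|paid)"),
    ("Shareholder, Stakeholder or Ownership", r"share(s|holder)|stock|equity|owner"),
    ("Personal Relationship", r"spouse|family member|personal relationship"),
    ("Donation", r"donat"),
]

def categorize(statement: str) -> list[str]:
    """Return every category whose pattern matches the statement."""
    text = statement.lower()
    return [name for name, pattern in COI_CATEGORIES if re.search(pattern, text)]

print(categorize("The authors declare no conflict of interest."))
# -> ['None Declared']
print(categorize("J.S. received consulting fees from a pharmaceutical company."))
# -> ['Funds Received']
```

Real statements are far messier than keyword rules can handle, which is why the team turned to manual annotation and NLP, as described next.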
This analysis involved manually annotating hundreds of COI statements, supported by Natural Language Processing (NLP) tools. The aim was to create a gold standard that could be used to categorize all other COI statements. However, despite the team’s diligence, a significant challenge persisted in the shape of ‘data skewness’: an imbalance in the distribution of data within a dataset that can distort data processing and analytics.
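To see what data skewness looks like in this setting, the sketch below counts how annotated statements might distribute across the six categories. The counts are invented placeholders, not the study’s figures.

```python
from collections import Counter

# Placeholder annotation counts, invented purely for illustration.
labels = (
    ["None Declared"] * 850
    + ["Funds Received"] * 90
    + ["Membership or Employment"] * 40
    + ["Shareholder, Stakeholder or Ownership"] * 12
    + ["Personal Relationship"] * 5
    + ["Donation"] * 3
)

dist = Counter(labels)
total = sum(dist.values())
for category, n in dist.most_common():
    print(f"{category:<40} {n:>5} ({n / total:.1%})")
```

A distribution this lopsided gives a classifier few minority-class examples to learn from and biases it toward predicting ‘None Declared’.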

Fatal flaw

One irresistible conclusion from the data skewness was a simple one: authors weren’t truthfully reporting their conflicts of interest. But could this really be true?

The gold standard came from manual, expert annotation of COI statements, which was then used to develop an auto-annotation process. Yet despite the algorithm’s ability to auto-annotate 33,812 papers in just 15 minutes, the skewness identified at the outset persisted, pointing to the theory that authors were falsely reporting their conflicts (see Figure 1 of the COI poster).

To firm up this hypothesis, the team analyzed the Retraction Watch database, where the troubling trend, including the discrepancy between the reported COI category and the stated retraction reason, became even more apparent (see Figure 2 of the COI poster).

Moreover, as the investigation continued, the team found 24,289 papers that appear in both Dimensions on Google BigQuery (GBQ) and Retraction Watch; among those, 393 were retracted due to conflict of interest. Of those 393 papers, 134 had a COI statement, yet 119 of them declared there was no conflict to declare.
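The arithmetic implied by these figures is worth spelling out. In the snippet below, the numbers are taken directly from the article; only the percentages are derived.

```python
# Figures reported above; only the ratios are computed here.
overlapping_papers = 24_289    # in both Dimensions GBQ and Retraction Watch
retracted_for_coi = 393
with_coi_statement = 134
declared_no_conflict = 119

print(f"{retracted_for_coi / overlapping_papers:.2%} of overlapping papers "
      "were retracted over conflicts of interest")        # 1.62%
print(f"{declared_no_conflict / with_coi_statement:.1%} of those with a COI "
      "statement claimed there was nothing to declare")   # 88.8%
```

In other words, nearly nine in ten of the retracted papers that did include a COI statement used it to declare no conflict at all.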

Conclusion

Underreporting and misreporting of conflict of interest statements can undermine the integrity of scholarly work. Other research integrity issues, such as paper mills, plagiarism and predatory journals, have already damaged the public’s trust in published research, so further problems with COIs can only worsen the situation. In light of these findings, it is clear that all stakeholders in the research publication process must adopt standard practices for reporting critical trust markers such as COI, in order to uphold transparency and honesty in scholarly endeavors.

To finish on a positive note, the research poster was awarded second place at the 2024 World Conference on Research Integrity, showing that the team’s work has already attracted considerable attention among those who seek to safeguard research integrity and trust in science.

You can find the poster on Figshare: https://doi.org/10.6084/m9.figshare.25901707.v2

Partial data and the code for this project are also available on Figshare.


For more on the topic of research integrity, see details of Digital Science’s Catalyst Grant award for 2024, which focuses on digital solutions in this area.


About the Author

Simon Linacre, Head of Content, Brand & Press | Digital Science

Simon has 20 years’ experience in scholarly communications. He has lectured and published on the topics of bibliometrics, publication ethics and research impact, and has recently authored a book on predatory publishing. Simon is an ALPSP tutor and has also served as a COPE Trustee.