Embracing Failure as an Intrinsic Part of Science #Failtales
One of the most powerful things I have learned in the last few years is that failure is totally okay. Failure is normal and natural, and I mean this in all walks of life: it is okay to be wrong, and it is okay to make mistakes, as long as we use them to better ourselves and grow as humans. We all fail at one point or another in our lives – the difference is how we use those failures.
The same is true for science. Science does not progress just by standing on the shoulders of giants – it also relies on understanding that those shoulders are propped up by failure as much as by success. Failed experiments, failed analyses, failed hypotheses: all of these characterise authentic science.
But we have a problem. As a culture, we have created a myth that status can only be granted through success, usually the discovery of cool or interesting ‘positive’ results. At the same time, the way we selectively communicate science signals that failure is unacceptable. We can do better.
Cherry-picking results
Science rarely has its “eureka!” moments. The modern reality is more like “Yes, finally, I got the results I need in order to get published.” What we seldom see is, as we say in England, the making of the sausage. Most research is a continuous process, a messy melange. The problem is that, very often, this process goes uncommunicated – secretly stowed away, never to be seen. By and large, we only ever select ‘positive’ results to be communicated.
Very rarely, however, is this process reflected in the way in which we distribute ‘prestige’ and reward science. The primary focus on results creates an incentive for selective reporting. This selectivity is often justified as a form of gatekeeping needed to maintain the ‘quality’ of the published record. In reality, what it creates is a distortion of that record.
Countless pages have been written about such publication bias. It arises from what is deemed ‘publishable’ by researchers, funders, and publishers. Sadly, what we know is that what is deemed publishable (or even sellable) often diverges sharply from the true and full results of research. ‘Negative’ results, for example, are often viewed as unpublishable, while only results perceived to be novel, ‘sexy’, or ‘successful’ are considered worth communicating.
But this is exceptionally dangerous. What we have curated as a research community is an artificial record that only advertises the best, most successful elements of science. All of the failure is kept on hard drives, in desk drawers, or in our minds.
Imagine what impact that has on research duplication. People are running analyses and experiments right now that others have undoubtedly run before but never communicated. People are writing grant applications to do research that has already been conducted, but that remains known only to one group because the results were not deemed publishable.
“…we have an incredibly skewed publication record that systematically rejects ‘failure’, or ‘negative results’.”
We can see just how prevalent this issue is. The image below is from a study by Daniele Fanelli in 2010. It reveals how widespread the selective publication of ‘positive’ results is across research disciplines. You don’t have to be an expert to see that this is pretty much the exact opposite of what the picture should look like if the published record reflected reality. Its prevalence across research disciplines, albeit to different degrees, suggests that something systemic is affecting the way in which we publish results. It shows that we have an incredibly skewed publication record that systematically rejects ‘failure’, or ‘negative results’.
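To get a feel for how strongly this filter can distort the record, here is a minimal toy simulation in Python. Every rate in it is an illustrative assumption of mine, not a number taken from Fanelli’s study:

```python
import random
import statistics

# Toy simulation of publication bias. Every parameter below is an
# illustrative assumption, not a value from Fanelli (2010).
random.seed(42)

N_STUDIES = 10_000
TRUE_EFFECT_RATE = 0.2    # assume 20% of tested hypotheses are actually true
POWER = 0.8               # chance a real effect yields a 'positive' result
ALPHA = 0.05              # false-positive rate when there is no real effect
PUBLISH_NEGATIVE = 0.1    # assume only 10% of negative results get written up

def study_is_positive():
    """Simulate one study; return True if it finds a 'positive' result."""
    if random.random() < TRUE_EFFECT_RATE:
        return random.random() < POWER
    return random.random() < ALPHA

all_results = [study_is_positive() for _ in range(N_STUDIES)]

# The published record: positives always appear, negatives only sometimes.
published = [r for r in all_results if r or random.random() < PUBLISH_NEGATIVE]

print(f"Positive share of ALL studies:      {statistics.mean(all_results):.0%}")
print(f"Positive share of PUBLISHED record: {statistics.mean(published):.0%}")
```

Under these made-up (but not implausible) assumptions, only about 20% of the studies actually performed are positive, yet roughly 70% of the published record is – exactly the kind of skew the figure documents.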
What even are ‘negative’ results?
‘Negative results’ appears to have a strange double meaning. The first sense concerns results that do not ‘positively’ support a tested hypothesis (or that fail to reject the null hypothesis). The second concerns results that are deemed subjectively negative – results that cannot be published for one reason or another, for example because they do not fit a research narrative.
Both of these framings can be very harmful to advancing knowledge. By framing results as positive or negative, we impose a value judgement on them. This leads to the selective communication of those results, as seen above. Results are results, and all should be communicated. Finding no support for a hypothesis still tells us something about the world, and we do the entire scientific enterprise a disservice when we sweep such findings under the rug.
What does this have to do with Open Science?
Open Science means a lot of different things depending on who you ask. Usually, though, it comes down to two core things. The first is helping to make the results of research more accessible, typically by removing financial or re-use barriers. The second is being more transparent about the research process itself.
Typically, then, Open Science manifests itself in practical things such as data sharing, or creating reproducible environments that allow others to recreate much more of the research process than they could from the final article alone. Registered reports are a powerful component of Open Science, whereby a paper is conditionally accepted for publication before any results have even been collected. One fundamental idea here is to remove bias from the published record, as well as to expose more of the process.
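As a deliberately minimal sketch of what a ‘reproducible environment’ can mean at the level of a single analysis script – the file names and the stand-in computation here are hypothetical, not a prescribed toolchain:

```python
# reproduce.py – a minimal, self-documenting, re-runnable analysis sketch.
# The analysis itself is a stand-in; the point is the fixed seed and the
# provenance record that travels with the result.
import json
import platform
import random
import sys

SEED = 2019               # fix all randomness so reruns give identical output
random.seed(SEED)

# Stand-in for the real analysis: with the seed fixed, this is repeatable.
data = [random.gauss(0.0, 1.0) for _ in range(1_000)]
mean = sum(data) / len(data)

# Record exactly what this run looked like, alongside the result itself.
provenance = {
    "python_version": sys.version,
    "platform": platform.platform(),
    "seed": SEED,
    "mean": mean,
}
with open("run_provenance.json", "w") as fh:
    json.dump(provenance, fh, indent=2)

print(f"mean = {mean:.4f} (rerunning with seed {SEED} gives this exact value)")
```

The same habit scales up: pin your dependencies, fix your seeds, and record the conditions of every run, so that ‘failure to replicate’ can be diagnosed rather than argued about.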
When the results of a study fail to reproduce or replicate (two different things), it does NOT mean that the original study was wrong. This is a flagrant distortion of the issue. https://t.co/gM9iF3LM6E
— Lorena Barba (@LorenaABarba) February 17, 2019
This tweet from Lorena Barba recently resonated with me. She makes the point that a failed replication or reproduction has less to do with the original study being wrong, and more to do with issues of full and transparent reporting. I feel that many of the issues around reproducibility (often called a ‘crisis’) could be resolved if we accepted ‘failure’ as fundamental to research and allowed more transparency to be injected into the process.
“Results of research are not under our control, but the way we communicate them is an attempt to impose control.”
What these processes help to do is create an inherent cultural shift towards accepting failure. When we select which results to communicate in articles, and which to omit, we are essentially saying failure should not be exposed as part of the research process. Results of research are not under our control, but the way we communicate them is an attempt to impose control.
When we expose all elements, warts and all, for inspection and re-analysis, we are sending the message that yes, the process is messy, but here’s the whole thing. That failure is okay. The methods are something under our control, and thus through embracing failure we impose accountability on ourselves.
“When we reject failure, we create a culture of punishment, artificial rewards, and scientific bias. When we embrace failure, we cultivate a culture of acceptance, tolerance, and learning. Which one would you prefer?”
As an example, a paper we published in 2016 explored what environmental conditions might have controlled the diversity of animals like dinosaurs and crocodiles over millions of years. For this, we performed hundreds of analyses, looking at whether things like changes in temperature, sea level, and even the rock record influenced animal diversity. Most of these results were ‘negative’ – as in, no correlation. But we still communicated them, every single one, because this tells a more complete story. We saw this as critical to being honest and open about our research, and to embracing ‘failure’ as fundamental to the message we wanted to communicate.
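The actual analyses in that paper were far more involved, but the reporting principle reduces to something like the following toy sketch. The data and variable names are invented; the point is simply the habit of reporting every outcome, significant or not:

```python
# Toy version of "report every result": correlate several candidate
# environmental drivers against a diversity series and report ALL of the
# outcomes, not just the significant ones. The data here are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2016)
n = 60  # e.g. 60 geological time bins

diversity = rng.normal(size=n)
drivers = {
    "temperature": rng.normal(size=n),              # no built-in relationship
    "sea_level":   rng.normal(size=n),              # no built-in relationship
    "rock_record": diversity + rng.normal(size=n),  # one genuine correlate
}

for name, series in drivers.items():
    r, p = pearsonr(diversity, series)
    verdict = "correlated" if p < 0.05 else "no correlation (still a result!)"
    print(f"{name:12s} r = {r:+.2f}  p = {p:.3f}  -> {verdict}")
```

A table of mostly ‘no correlation’ rows is not a failure of the study; it is the study.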
When we reject failure, we create a culture of punishment, artificial rewards, and scientific bias. When we embrace failure, we cultivate a culture of acceptance, tolerance, and learning. Which one would you prefer?
Author Bio
Jon completed his PhD at Imperial College London two years ago, where he was a paleontologist studying the evolution of animals like dinosaurs and crocodiles. Now, he is an independent researcher, working on topics such as peer review and open science, and supported by a Shuttleworth Flash Grant. He is the founder of the Open Science MOOC community, the digital publishing platform paleorXiv, and the lead author of the Foundations for Open Scholarship Strategy Development. In his spare time, he campaigns for the democratisation of science and writes kids’ books about dinosaurs.