
A negative result is positive for science

The first thing you learn as a scientist is that most of your experiments will fail. Yet even though the rejection of hypotheses drives the progress of science, most of these investigations never get documented.

Studies with positive results are more represented in the literature, which has created a publication bias. While most researchers enter the field to advance knowledge and improve the world, the reality is that the conduct of the scientific community is not always transparent. The pressure to obtain positive results for career progression and recognition has created a damaging narrative surrounding negative results that is hindering the community.

What if there was a different way? What if a negative result were seen as just as important as a positive one, given that it too helps drive scientific knowledge forward?

In this blog, we explore what it means to obtain a negative result and why reporting such results is essential for scientific advances to be fully understood.

What is a negative result?

Experiments begin by setting hypotheses. These are the initial building blocks of the scientific method. A hypothesis is a prediction or explanation that can then be tested by an experiment. Researchers use these to ask a number of questions: What will happen? What might they discover? How could it differ from what is currently accepted? And how might it lead to the discovery of something new?

Unless scientists are also psychics, negative or null results are bound to be common. There are three common reasons why researchers obtain them:

  • The original hypothesis was inaccurate, based on incorrect assumptions.
  • Technical errors, e.g., the use of inappropriate statistical methods.
  • The researchers were unable to replicate findings obtained from earlier published reports.

Results arising from the first two reasons rarely reach publication, as scientists tend to terminate these experiments prematurely.

The negative narrative

There are growing concerns that the increasing competition for funding and citations has distorted science. A 2012 study showed that the frequency at which papers testing a hypothesis returned a positive conclusion increased by more than 22% from 1990 to 2007. In 2007, more than 85% of published studies claimed to have produced positive results. The study’s author, Daniele Fanelli, concluded that scientific objectivity in published papers was declining.

Negative findings can be frustrating and often demotivate people, particularly young scientists who are trying to carve out their careers. In addition, projects are costly, and if a project doesn’t deliver promising results, it is often abandoned. Some argue that reporting negative results is a waste of resources because the input costs may outweigh the final output, i.e., the impact factor of the journal that accepts them. For example, reviewers may ask investigators to complete multiple additional controls to make sure that negative results are not just the effect of technical mistakes.

An important factor for investigators is finding an editor who wants to publish the data. Most editors want exciting material – new mechanisms, unexpected findings, etc – that will increase citations, social shares and press coverage. Unfortunately, most negative results are boring. Because of this, researchers find it difficult to place negative results in an appropriate journal, and such findings often end up in journals with much lower impact factors.

Many supervisors are also concerned that publishing negative results could ruin the careers of their students as well as their own. Spending a lot of resources on the wrong project and publishing with a low impact factor can result in principal investigators/groups receiving less funding in the future. Young scientists may also feel that a negative study will dramatically reduce their chances of getting jobs. Most supervisors aim for journals with a high impact factor, such as Nature or Science, and tend to abandon projects without exciting results. This attitude is probably one of the biggest problems in science.

Implications of not publishing

There are three key elements that define any research output – reproducibility, robustness and translatability. Published reports are key to enable investigators worldwide to build their hypotheses. As such, the reporting of both positive and negative results is essential to ensure the scientific process is robust and credible. Negative or null results help researchers modify their research plans.

In the 19th century, physicists Albert Michelson and Edward Morley conducted a series of experiments to study the expected motion of the Earth relative to the aether. Their experiment, now known as the Michelson-Morley experiment, returned a null result. Nearly two decades later, Albert Einstein developed his theory of special relativity, informed in part by the absence of evidence for the luminiferous aether as published by Michelson and Morley. This body of work contributed to Michelson being awarded the 1907 Nobel Prize in Physics, and the Michelson-Morley experiment is considered one of the most famous non-confirmatory experiments.

The bias towards positive results has a clear impact on scientific communication. It disregards contributions to science that do not fit the ‘positive’ model of science. For many young scientists, who only hear stories of scientific success, it can exacerbate imposter syndrome when their own work doesn’t match these expectations. The pressure to publish positive results in high-impact journals can also lead scientists to commit fraud and manipulate data, particularly if there is pressure from their supervisors. In some cases, scientists may hype up their research and foster unrealistic expectations, which in turn can result in distrust when translation into clinical care comes at a much slower pace.

Some argue that not publishing negative results is unethical. When negative results are not made publicly available, other groups end up repeating the same or similar experiments. This is not only a waste of time, effort and money, but it prevents people from learning and trying different avenues that could progress the field further.

In 2016, a survey reported that 70% of researchers had tried and failed to reproduce another scientist’s experiments. In addition, a recently concluded eight-year study found that dozens of major cancer research papers could not be replicated, and in those that were reproduced, effect sizes were, on average, 85% lower than originally reported. One of the reasons for this so-called reproducibility crisis is positive-results bias: authors are more likely to submit, and editors to accept, positive results than negative ones. Publishing only positive results distorts our view of reality and our knowledge.

Sharing negative results

Firstly, scientists need to acknowledge the fact that all work is important – regardless of the outcome.

However, simply telling people to publish their negative results is easier said than done. To publish negative results, the data must be shown to be statistically sound – that is, the negative finding must be adequately powered and the experiments repeated to rule out technical faults. In addition, complete access to methodologies and raw data must be provided.
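One concrete way to demonstrate that a result is genuinely negative, rather than merely underpowered, is a power analysis. The sketch below is illustrative (the function name and defaults are my own, not drawn from the sources above) and uses only Python’s standard library, with the normal approximation to the two-sample t-test:

```python
import math
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample comparison,
    using the normal approximation to the two-sided t-test."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile matching the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a "medium" effect (Cohen's d = 0.5) takes roughly 63
# participants per group; a study with far fewer cannot credibly
# claim the effect is absent.
print(required_n_per_group(0.5))  # → 63
```

A formal equivalence test (e.g., two one-sided tests) is the more rigorous route, but even this back-of-the-envelope check helps reviewers distinguish “no effect” from “no power”.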

Nonetheless, publishing negative data can lead to greater information sharing and better outcomes, as people can learn from the mistakes of others. There are currently several platforms where researchers can post their findings, such as open forums or databases. However, publishing in peer-reviewed, widely accepted journals remains the preferred option.

In recent years, there has been an encouraging shift in journals’ willingness to consider negative results, with a number of distinguished journals now explicitly welcoming such submissions.

While some scientists are on board with this change, there is still an underlying stigma surrounding negative results. To overcome this, reviewers and publishers must be committed to sharing such results in their journals. Funding agencies need to continue to support scientists who produce important negative results. And academic conferences need to do more to encourage honest discussions about how we can learn from failed experiments.

Another fundamental problem is how the practice of scientific publishing influences the practice of science itself. The fact that I can’t get access to loads of papers to do my science communication job proves my point. Even before that, researchers themselves have to pay to publish their own discoveries, so the system is unfair from the start. The people reviewing the work are sometimes known to those submitting it. And because most reviewers are working scientists themselves, there is very little time to actually reproduce the experiments during the review process. To solve this problem, open-source science is key. We must review the existing publishing system and allow people to directly comment on and challenge this freely available work.

But, above all, honesty within science is vital. As we have seen with the COVID-19 pandemic, scientists can have a big impact on governmental decisions and how the public acts. If deception stems from people in such positions of power, then trust in the scientific community will fracture and, ultimately, significant progress will be hindered.

References


  • Mehta D. Highlight negative results to improve science. Nature. 2019 Oct 4.
  • Mlinarić A, Horvat M, Šupak Smolčić V. Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia medica. 2017 Oct 15;27(3):447-52.
  • Enago Academy. Experts’ Take: Why It Is Important to Publish Negative Results. 2018.
  • Sayao LF, Sales LF, Felipe CB. Invisible science: publication of negative research results. Transinformação. 2021 Feb 8;33.
  • Hendrix S. Should I Publish Negative Results Or Does This Ruin My Career In Science? Smart Science Career.
  • Murudkar S. Top 10 Journals to Publish Your Negative Results. Enago Academy. 2021.
