Selectively publishing results that support the tested hypotheses (“positive” results) distorts the available evidence for scientific claims. For the past decade, psychological scientists have been increasingly concerned about the degree of such distortion in their literature. A new publication format has been developed to prevent selective reporting: in Registered Reports (RRs), peer review and the decision to publish take place before results are known.
We compared the results in published RRs (n = 71 as of November 2018) with a random sample of hypothesis-testing studies from the standard literature (n = 152) in psychology.
Analyzing the first hypothesis of each article, we found 96% positive results in standard reports but only 44% positive results in RRs.
We discuss possible explanations for this large difference and suggest that a plausible factor is the reduction of publication bias and/or Type I error inflation in the RR literature.
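As a rough check on the magnitude of this difference, the two rates can be compared with a simple two-proportion z-test. This is a sketch, not the authors' analysis code; the counts below (146 of 152 standard reports, 31 of 71 RRs) are assumptions inferred from the reported percentages (96.05% and 43.66%), and the paper's own inferential analysis may differ.

```python
# Two-proportion z-test comparing positive result rates in standard
# reports (SRs) vs. Registered Reports (RRs). Counts are inferred from
# the reported percentages, not taken from the authors' data files.
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(146, 152, 31, 71)
print(f"SR rate: {146/152:.2%}, RR rate: {31/71:.2%}, z = {z:.2f}")
```

Under these assumed counts the z statistic is very large, consistent with the paper's characterization of the SR–RR gap as substantial rather than attributable to sampling noise alone.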
…The positive result rate we found in SRs (96.05%) is slightly, but not statistically significantly, higher than the 91.5% reported by Fanelli (2010). Our replication in a more recent sample of the psychology literature thus yielded a comparably high estimate of supported hypotheses, but we cannot rule out that the positive result rate in the population has increased since 2010 (cf. Fanelli, 2012). Furthermore, our estimate of the positive result rate for RRs (43.66%) is comparable with the 39.5% reported by Allen and Mehler (2019), despite some differences in method and studied population.