“No Need to Choose: Robust Bayesian Meta-Analysis With Competing Publication Bias Adjustment Methods”, František Bartoš, Maximilian Maier, Eric-Jan Wagenmakers, Hristos Doucouliagos, T. D. Stanley (2021-06-17)⁠:

Publication bias is a ubiquitous threat to the validity of meta-analysis and the accumulation of scientific evidence. To estimate and counteract the impact of publication bias, multiple methods have been developed; however, recent simulation studies have shown that the methods’ performance depends on the true data-generating process—no method consistently outperforms the others across a wide range of conditions.

To avoid the condition-dependent, all-or-none choice between competing methods, we extend robust Bayesian meta-analysis and model-average across two prominent approaches to adjusting for publication bias: (1) selection models of p-values and (2) models of the relationship between effect sizes and their standard errors. The resulting estimator weights the models by the support they receive from the existing research record.
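The weighting described above is standard Bayesian model averaging: each candidate model's estimate contributes in proportion to its posterior model probability (prior probability times marginal likelihood, normalized). A minimal sketch, with made-up numbers that are not from the paper and only two stand-in adjustment models:

```python
# Sketch of Bayesian model averaging (BMA) over competing bias-adjustment
# models. All numbers are hypothetical, for illustration only.

def model_average(estimates, marginal_likelihoods, priors):
    """Return posterior model probabilities and the model-averaged estimate."""
    joint = [p * ml for p, ml in zip(priors, marginal_likelihoods)]
    total = sum(joint)
    post = [j / total for j in joint]                  # posterior model probabilities
    avg = sum(w * e for w, e in zip(post, estimates))  # weighted pooled estimate
    return post, avg

# Two stand-in models: e.g. a selection model and a PET-PEESE-style model.
post, avg = model_average(
    estimates=[0.20, 0.10],           # each model's adjusted effect-size estimate
    marginal_likelihoods=[0.8, 0.2],  # support each model receives from the data
    priors=[0.5, 0.5],                # equal prior model probabilities
)
# post → [0.8, 0.2]; avg → 0.8*0.20 + 0.2*0.10 = 0.18
```

The better-supported model dominates the pooled estimate, but no model is discarded outright, which is the point of avoiding the all-or-none choice.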

Applications, simulations, and comparisons to preregistered, multi-lab replications demonstrate the benefits of Bayesian model-averaging of competing publication bias adjustment methods.

[Keywords: Bayesian model-averaging, meta-analysis, PET-PEESE, publication bias, selection models]