“Empirical Audit and Review and an Assessment of Evidentiary Value in Research on the Psychological Consequences of Scarcity”, Michael O’Donnell, Amelia S. Dev, Stephen Antonoplis, Stephen M. Baum, Arianna H. Benedetti, N. Derek Brown, Belinda Carrillo, Andrew L. Choi, Paul Connor, Kristin Donnelly, Monica E. Ellwood-Lowe, Ruthe Foushee, Rachel Jansen, Shoshana N. Jarvis, Ryan Lundell-Creagh, Joseph M. Ocampo, Gold N. Okafor, Zahra Rahmani Azad, Michael Rosenblum, Derek Schatz, Daniel H. Stein, Yilu Wang, Don A. Moore, Leif D. Nelson, 2021-11-02:

Empirical audit and review is an approach to assessing the evidentiary value of a research area. It involves identifying a topic and selecting a cross-section of studies for replication. We apply the method to research on the psychological consequences of scarcity. Starting with the papers citing a seminal publication in the field, we conducted replications of 20 studies that evaluate the role of scarcity priming in pain sensitivity, resource allocation, materialism, and many other domains. There was considerable variability in the replicability, with some strong successes and other undeniable failures. Empirical audit and review does not attempt to assign an overall replication rate for a heterogeneous field, but rather facilitates researchers seeking to incorporate strength of evidence as they refine theories and plan new investigations in the research area. This method allows for an integration of qualitative and quantitative approaches to review and enables the growth of a cumulative science.

[Keywords: scarcity, reproducibility, open science, meta-analysis, evidentiary value]

…We selected 20 studies for replication. We built a set of eligible papers and then drew from that set at random. The set included studies that (1) cited Shah et al’s seminal 2012 paper on scarcity, (2) included scarcity as a factor in their design, and (3) could be replicated with an online sample. We did not decide on an operational definition of scarcity, but instead accepted all measures and manipulations of scarcity proposed by the original authors…To give us sufficient precision to comment on the statistical power of the original effects, our replications employed 2.5× the sample size of the original paper (8). Because these larger samples can detect smaller effects than the original studies could, our replications could have yielded statistically-significant results even in cases where the original findings were not statistically-significant.
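The implications of the 2.5× sampling rule can be sketched numerically. As a simplification on my part (the replicated studies used a variety of designs, and the paper's exact computations may differ), treat each effect as a correlation coefficient tested against zero with a two-sided Fisher z-test; the minimum detectable effect then shrinks roughly as 1/√n:

```python
from math import atanh, tanh, sqrt
from statistics import NormalDist

_nd = NormalDist()

def power_for_r(r: float, n: int, alpha: float = 0.05) -> float:
    """Power of a two-sided test of H0: rho = 0 for a true
    correlation r with n pairs, via the Fisher z-approximation."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    ncp = atanh(r) * sqrt(n - 3)  # noncentrality of the Fisher z statistic
    return (1 - _nd.cdf(z_crit - ncp)) + _nd.cdf(-z_crit - ncp)

def min_detectable_r(n: int, power: float = 0.80, alpha: float = 0.05) -> float:
    """Smallest correlation detectable with the given power and n pairs."""
    z = _nd.inv_cdf(1 - alpha / 2) + _nd.inv_cdf(power)
    return tanh(z / sqrt(n - 3))

# Hypothetical original sample, scaled by the paper's 2.5x rule:
n_original = 100
n_replication = int(2.5 * n_original)
print(min_detectable_r(n_original))     # ~0.28 at n = 100
print(min_detectable_r(n_replication))  # ~0.18 at n = 250
```

Under this approximation, a replication 2.5× the size of an n = 100 original can reliably detect correlations about a third smaller, which is why non-significant original findings could still replicate as significant.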

Figure 1: The Leftmost columns indicate common features among the replicated studies, and the Middle column depicts effect sizes (correlation coefficients) for the original and replication studies, shown with 95% CIs. The Right columns indicate the estimated power in the original studies (third column from Right), the upper bound of the 95% CI for estimated power in the original (second column from Right), as well as an estimated sample size required for 80% power, based on the replication effect (Rightmost column).

Figure 1 shows our results. The Leftmost columns categorize commonalities among the 20 studies. In the 6 studies featuring written independent variables, we reviewed the responses for nonsensical or careless answers and excluded them; results including these responses are in the SI Appendix. The Middle column shows that replication effect sizes were smaller than the original effect sizes for 80% of the 20 studies, and directionally opposite for 30% of them. Of the 20 studies that were statistically-significant in the original, only 4 of our replication efforts yielded statistically-significant results. But statistical-significance is only one way to evaluate a replication. The 3 Rightmost columns report estimates of the power of the original studies based on the replication effects, including the upper bound of the 95% CI for that estimated power. Only 9 of the original studies included 33% power within these 95% CIs, indicating that most of the 20 effects we attempted to replicate were too small to have been reliably detected in the original investigations.
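The Rightmost columns' logic can be illustrated with the same Fisher z-approximation (again an assumption on my part, treating each effect as a simple correlation; the paper's exact method may differ): take the replication effect size as the best estimate of the true effect, compute the power the original sample had to detect it, and solve for the sample size that would give 80% power.

```python
from math import atanh, sqrt, ceil
from statistics import NormalDist

_nd = NormalDist()

def original_power(r_replication: float, n_original: int,
                   alpha: float = 0.05) -> float:
    """Power the original study had, assuming the true correlation
    equals the replication estimate (Fisher z-approximation)."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    ncp = atanh(r_replication) * sqrt(n_original - 3)
    return (1 - _nd.cdf(z_crit - ncp)) + _nd.cdf(-z_crit - ncp)

def n_for_80pct_power(r: float, alpha: float = 0.05) -> int:
    """Sample size giving 80% power to detect correlation r."""
    z = _nd.inv_cdf(1 - alpha / 2) + _nd.inv_cdf(0.80)
    return ceil((z / atanh(r)) ** 2 + 3)

# Hypothetical numbers: if a replication estimates r = 0.10, an
# original study with n = 80 had well under 33% power, and several
# hundred participants would be needed for 80% power.
print(original_power(0.10, 80))
print(n_for_80pct_power(0.10))
```

This is why a small replication effect implies both that the original was underpowered and that an adequately powered follow-up may need many times the original sample.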

…Scarcity is a real and enduring societal problem, yet our results suggest that behavioral scientists have not yet fully identified its underlying psychology. Although this project has neither the goal nor the capacity to “accept the null” hypothesis for any of these tests, the replications of these 20 studies indicate that, within this set, scarcity primes have minimal influence on cognitive ability, product attitudes, or well-being.