“Statistically Controlling for Confounding Constructs Is Harder Than You Think”, Jacob Westfall & Tal Yarkoni, 2016-03-17:

Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability.

We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high false positive error rates under parameter regimes common in many psychological domains.

Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims [such as mediation analysis] made in the literature are spurious.
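The mechanism behind these false positives can be sketched with a small Monte Carlo simulation (the parameters below are illustrative choices, not the paper's): the outcome `Y` depends only on a latent confound `A`, a second construct `B` is merely correlated with `A`, and `A` is measured with reliability below 1. Regressing `Y` on the unreliable measure of `A` plus `B` then declares `B` "significant" far more often than the nominal 5% rate:

```python
# Illustrative Monte Carlo sketch (hypothetical parameters): when the
# confound A is measured with reliability < 1, a correlated construct B
# with NO true incremental effect on Y still tests as "significant".
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(n=1000, n_sims=200, reliability=0.7, rho=0.5):
    """Fraction of simulations in which B's coefficient is 'significant'
    (|t| > 1.96) even though B has no true effect on Y."""
    hits = 0
    for _ in range(n_sims):
        A = rng.standard_normal(n)                    # latent confound
        B = rho * A + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        Y = A + rng.standard_normal(n)                # Y depends on A only
        # Unreliable measure of A: add error so var(A)/var(a) = reliability.
        err_var = (1 - reliability) / reliability
        a = A + np.sqrt(err_var) * rng.standard_normal(n)
        # OLS of Y on [intercept, a, B], with a t-test on B's coefficient.
        X = np.column_stack([np.ones(n), a, B])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_B = beta[2] / np.sqrt(cov[2, 2])
        hits += abs(t_B) > 1.96
    return hits / n_sims
```

With these made-up settings the false positive rate for `B` is far above 5%, and (as the abstract notes, counterintuitively) growing `n` only makes the spurious effect easier to "detect", since the bias from imperfectly controlling for `A` does not shrink with sample size.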

We present a web application (https://jakewestfall.org/) that readers can use to explore the statistical properties of these and other incremental validity arguments.

We conclude by reviewing SEM-based statistical approaches that appropriately control the false positive error rate when attempting to establish incremental validity.