“Shifting Minds: A Quantitative Reappraisal of Cognitive-Intervention Research”, David Moreau, 2020-10-06:

Recent popular areas of research in psychology suggest that behavioral interventions can have profound effects on our cognitive abilities. In particular, the study of brain training, video gaming, growth mindset, and stereotype threat all include claims that low-cost, noninvasive manipulations of the environment can greatly affect individual performance.

Here, I provide a quantitative reappraisal of this literature, focusing on recent meta-analytic findings.

Specifically, I show that effect-size distributions in the 4 aforementioned areas are best modeled by [a mixture of] multiple rather than single latent distributions, suggesting important discrepancies in the reported effect sizes. I further demonstrate that these multimodal characteristics are not typical within the broader field of psychology, using 107 meta-analyses published in 3 top-tier journals as a comparison.

The effect-size distributions observed in cognitive-intervention research therefore appear to be uncommon, and their characteristics are largely unexplained by current theoretical frameworks of cognitive improvement. Before the source of these discrepancies is better understood, the current study calls for constructive skepticism in evaluating claims of cognitive improvement after behavioral interventions and for caution when this line of research influences large-scale policies.

[Keywords: environment, cognitive improvements, intelligence, brain plasticity, genetics, meta-analysis, mixture modeling]

What does multimodality mean in the context of cognitive interventions? This study, the first to systematically model and characterize latent distributions of effect sizes in the context of cognitive-intervention research, provides novel information that supports recent advances in our understanding of cognitive malleability (Moreau et al 2019; Sala & Gobet 2017). This quantitative reappraisal has a number of implications for our understanding of cognitive improvement via interventions; most importantly, it suggests that even when inferred from well-conducted, comprehensive meta-analyses, claims based on central-tendency measures such as mean effect size can be misleading and may not provide a solid basis for decisions or policies. In the context of intervention research, this is especially problematic, as it typically leads to conclusions that are not representative of expected outcomes. For example, generic claims about small but non-null effects for a given intervention, if based on mixtures of distributions, may convey little information with respect to potential applications. At the very least, this possibility should be factored into the decision process when seeking to implement large-scale interventions.
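The central-tendency problem above can be made concrete with a toy simulation (not from the paper; all numbers are made up for illustration): when effect sizes come from two latent populations, a minimal EM fit of a 2-component Gaussian mixture recovers the two clusters, while the pooled mean falls between them and describes neither.

```python
# Illustrative sketch, assuming hypothetical effect sizes drawn from two
# latent populations: many near-null results (d ~ 0.05) and a smaller
# cluster of large ones (d ~ 0.8). A minimal EM algorithm for a 1-D
# Gaussian mixture then separates the two components.
import numpy as np

rng = np.random.default_rng(0)
effects = np.concatenate([rng.normal(0.05, 0.10, 140),
                          rng.normal(0.80, 0.15, 60)])

def em_gmm(x, k=2, iters=200):
    """Minimal EM for a 1-D Gaussian mixture; returns weights, means, sds."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))  # spread-out starting means
    sd = np.full(k, x.std())
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and sds from responsibilities
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return w, mu, sd

w, mu, sd = em_gmm(effects)
print(f"pooled mean d   = {effects.mean():.2f}")   # sits between the clusters
print(f"component means = {np.sort(mu).round(2)}") # near the two true centers
```

With data like this, reporting only the pooled mean would suggest a modest, uniform benefit, when in fact most studies show essentially nothing and a minority show large effects.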

…Note that moderators do not have to relate to the intervention itself to be of influence—they could be embedded within the scientific process more generally. For example, multiple distributions of effect sizes could arise from well-known problems with current publishing practices, such as publication bias (Franco et al 2014) or perverse incentives (Stephan 2012). However, for these issues to be the reason for the multimodality observed in cognitive-intervention research, they would need to exert a specific influence within these research areas that is mostly uncommon in other contexts, given the contrasting pattern observed in the broader field of psychology. Although this is a possibility—for example, publication bias could be exacerbated in cognitive-intervention research given pressure toward extreme, newsworthy findings that have applied potential—testing this hypothesis was beyond the scope of the current study.
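The publication-bias mechanism can be sketched with a toy simulation (not from the paper; every parameter here is hypothetical): assume a single small true effect, and assume significant results are always published while nonsignificant ones appear only 20% of the time. Selective publication then distorts the distribution of reported effect sizes and inflates their mean.

```python
# Illustrative sketch of publication bias, under assumed (made-up) settings:
# true effect d = 0.1, two groups of n = 25 per study, two-sided test at
# p < .05, and a 20% publication rate for nonsignificant results.
import numpy as np

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.1, 25, 5000
se = np.sqrt(2 / n_per_group)             # approximate standard error of Cohen's d
observed = rng.normal(true_d, se, n_studies)

significant = np.abs(observed / se) > 1.96          # |z| > 1.96, i.e. p < .05
published = observed[significant | (rng.random(n_studies) < 0.20)]

print(f"true effect:      {true_d}")
print(f"mean published d: {published.mean():.2f}")  # inflated above the true effect
```

Because the significant studies cluster in the tails while the few published nonsignificant ones cluster near zero, the published record mixes two distinct subpopulations of effect sizes, which is one route by which selection pressures could produce mixture-like distributions.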