“Revisiting Meta-Analytic Estimates of Validity in Personnel Selection: Addressing Systematic Overcorrection for Restriction of Range”, Paul R. Sackett, Charlene Zhang, Christopher M. Berry, Filip Lievens (2021-12-30):

This paper systematically revisits prior meta-analytic conclusions about the criterion-related validity of personnel selection procedures, and particularly the effect of range restriction corrections on those validity estimates. Corrections for range restriction in meta-analyses of predictor-criterion relationships in personnel selection contexts typically involve the use of an artifact distribution.

After outlining and critiquing 5 approaches that have commonly been used to create and apply range restriction artifact distributions, we conclude that each has serious problems that often result in substantial overcorrection; as a consequence, the validity of many selection procedures for predicting job performance has been substantially overestimated.

Revisiting prior meta-analytic conclusions produces revised validity estimates. Key findings: most selection procedures that ranked high in prior summaries remain highly ranked, but with mean validity estimates reduced by 0.10–0.20 points. Structured interviews emerged as the top-ranked selection procedure. We also pair validity estimates with information about mean Black-White subgroup differences for each selection procedure, providing information about validity-diversity tradeoffs.

We conclude that our selection procedures remain useful, but selection predictor-criterion relationships are considerably lower than previously thought.

[Keywords: selection procedures, validity, meta-analysis, range restriction, artifact distribution]

…Before reviewing approaches to generating artifact distributions, there is a critical observation we need to make and elaborate: meta-analyses of selection procedure validity to date have assumed that the artifact distribution applies to all studies used in the meta-analysis. In the context of analyzing intercorrelations among predictors (as opposed to selection method validation, which focuses on predictor-criterion relationships), Sackett et al 2007 and Berry et al 2007 noted that applying the same correction factor (or artifact distribution correction) to all studies can be seriously misguided. Berry et al 2007 focused on the relationship between cognitive ability and employment interviews. Some studies administered the 2 measures to all applicants; in this setting there was no range restriction whatsoever. Others screened initially on ability and interviewed only a subset; in this case there was direct restriction on ability and indirect restriction on the interview. Others administered both predictors to current employees; in this case there was indirect restriction if the selection method used to select those employees was correlated with the interview, with ability, or with both. Berry et al 2007 detailed additional scenarios beyond these 3, but for our purposes the point is simply that applying a uniform correction across all studies makes no sense. They separated the available research studies into subsets based on information about the range restriction mechanism in each subset, and applied appropriate corrections within each subset. Conceptually, one could apply appropriate corrections to subsets, then combine the subsets for an estimate of the parameter of interest (e.g. mean operational validity).
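The subset strategy above can be sketched in a few lines. The Thorndike Case II formula for correcting a correlation under direct range restriction is standard, but the function names, the subset structure, and all numbers below are hypothetical illustrations, not values from the paper:

```python
from math import sqrt

def correct_direct_rr(r, u):
    """Thorndike Case II correction for direct range restriction.
    u = SD_unrestricted / SD_restricted; u = 1 means no restriction,
    so the observed r is returned unchanged."""
    return (u * r) / sqrt(1 + r**2 * (u**2 - 1))

def subset_mean_validity(subsets):
    """Correct each subset with its own u, then combine by sample size.
    `subsets` is a list of (mean_r, n, u) tuples -- one tuple per
    range-restriction mechanism, rather than one uniform correction
    applied to every study."""
    corrected = [(correct_direct_rr(r, u), n) for r, n, u in subsets]
    total_n = sum(n for _, n in corrected)
    return sum(rc * n for rc, n in corrected) / total_n

# Hypothetical example: an unrestricted subset (u = 1.0) and a directly
# restricted subset (u = 1.5); only the latter is corrected upward.
print(round(subset_mean_validity([(0.30, 400, 1.0), (0.25, 600, 1.5)]), 3))
```

The key design point, per the text, is that the correction factor varies by subset: studies with no range restriction receive no correction at all, instead of being inflated by an artifact distribution estimated from differently restricted studies.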