…The evidence in support of this practice [teachers advising individual students in college] includes a meta-analysis showing that 5 programs using this approach increased postsecondary graduation rates by an average of 0.29 standard deviations, or around 13 percentage points.
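The conversion from standard-deviation units to percentage points depends on the variability of the graduation outcome. As a minimal sketch, assuming a control-group graduation rate near 30 percent (a hypothetical value chosen for illustration; the conversion the meta-analysis actually used may differ), the arithmetic works out roughly as follows:

```python
# Back-of-envelope conversion from standard-deviation units to percentage
# points for a binary outcome. The 30% baseline graduation rate is an
# assumption chosen for illustration; the actual conversion may differ.
p_control = 0.30                           # assumed control-group graduation rate
sd = (p_control * (1 - p_control)) ** 0.5  # SD of a binary outcome, ~0.458
effect_sd = 0.29                           # reported average effect size
print(f"{effect_sd * sd * 100:.1f} percentage points")  # -> 13.3
```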
We think this is probably an overestimate of the true effect of this practice on graduation rates. The 5 studies that collected graduation data came from a pool of 9 studies that estimated effects on academic progress, a shorter-term outcome than graduation, and those 5 studies had much larger effects on academic progress than the 4 that did not estimate effects on graduation. To illustrate the problem, this post first works through a meta-analysis of studies conducted by us [MDRC] before returning to the WWC example.
…funders may prefer to allocate their resources to interventions with a higher likelihood of producing positive longer-term effects. Finally, authors or journal editors may decide not to publish studies with small short-term effects that also have small longer-term effects. Whatever the reason, Watts, Bailey, and Chen note that selecting studies for longer-term follow-up data collection based on short-term effects may provide an overly optimistic picture of the longer-term efficacy of a class of interventions [i.e., a form of publication bias].
We refer to this phenomenon—where longer-term follow-up data are available for a selected sample of studies among all studies of a class of interventions—as follow-up selection bias. We expect it will typically lead meta-analyses to overestimate longer-term effects for the range of interventions under consideration [by hiding fadeout].
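To see how this plays out mechanically, here is a minimal simulation sketch (all parameters are hypothetical): each study's short- and longer-term estimates are noisy measures of a common true effect, and longer-term data are collected only when the short-term estimate looks promising. The followed-up subset then overstates the average longer-term effect for the full class of interventions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies = 10_000  # many hypothetical studies, for a stable average

# Hypothetical world: each study has a true effect, and its short- and
# longer-term estimates are noisy measures of that effect (with some fadeout).
true_effect = rng.normal(0.05, 0.10, n_studies)
short_term = true_effect + rng.normal(0.0, 0.05, n_studies)
long_term = 0.7 * true_effect + rng.normal(0.0, 0.05, n_studies)

# Follow-up selection: longer-term data are collected only for studies whose
# short-term estimate looked promising.
followed_up = short_term > 0.05

print(f"Mean longer-term effect, all studies:      {long_term.mean():+.3f}")
print(f"Mean longer-term effect, followed-up only: {long_term[followed_up].mean():+.3f}")
```

Because the short-term estimate is correlated with the true effect, conditioning follow-up on it selects studies with above-average true effects, which is exactly the bias described above.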
An Example with Some Great Data: We first present results from a meta-analytic data set of 30 randomized controlled trials (RCTs) of 39 postsecondary interventions, covering over 65,000 students. This data set, known as The Higher Education Randomized Controlled Trial (THE-RCT), is available to researchers through the Inter-university Consortium for Political and Social Research (ICPSR). All 30 RCTs were conducted by MDRC.
…Figure 1 shows the distribution of effects on a short-term outcome for the studies that did not measure graduation for all sample members (labeled “No”) and the 12 that did (labeled “Yes”). The short-term outcome is the percentage of students who were enrolled in their second semester after random assignment. All of the “Yes” studies have positive short-term estimated effects, whereas many of the “No” studies have estimated effects that are null or negative. The weighted average estimated effect on second-semester enrollment is 4.4 percentage points for the “Yes” studies but only 1.2 percentage points for the “No” studies. This pattern could generate follow-up selection bias in a meta-analysis of longer-term intervention effects.
Figure 1: Short-term estimated effects from the THE-RCT sample.
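The post does not spell out how the averages behind Figure 1 are weighted; a common choice is to weight study-level estimates by sample size (or inverse variance). Here is a sketch of that computation using made-up numbers rather than the actual THE-RCT estimates, which are available through ICPSR:

```python
import numpy as np

# Made-up study-level estimates (in percentage points) and sample sizes,
# for illustration only; these are not the real THE-RCT values.
effects = np.array([5.0, 3.8, 4.6, 1.5, -0.4, 0.9])
n = np.array([2400, 1800, 3100, 1200, 900, 2000])
measured_graduation = np.array([True, True, True, False, False, False])

def weighted_mean(estimates, weights):
    """Sample-size-weighted average of study-level effect estimates."""
    return float((estimates * weights).sum() / weights.sum())

for label, mask in [("Yes", measured_graduation), ("No", ~measured_graduation)]:
    print(label, round(weighted_mean(effects[mask], n[mask]), 2))
```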
Returning to the What Works Clearinghouse Case: Figure 2 below shows estimated effects on a short-term student outcome, progressing in college, for studies that did and did not estimate effects on graduation rates. As with the THE-RCT studies, the studies that measured effects on graduation rates found larger effects on progressing in college than the studies that did not: the weighted average effects are 0.44 and 0.14 (in effect size units), respectively. Again, this pattern is likely to produce follow-up selection bias in meta-analytic estimates of longer-term effects.
Figure 2: Short-term estimated effects from the WWC sample.
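As a crude gauge of how much this selection could matter (back-of-the-envelope arithmetic only, not the statistical correction the follow-up post develops): if graduation effects scaled proportionally with short-term effects, the studies without graduation data would imply a graduation effect of roughly 0.29 × 0.14 / 0.44 ≈ 0.09 standard deviations, so a pooled estimate across all 9 studies would land well below the reported 0.29.

```python
# Back-of-envelope only: assumes graduation effects scale proportionally with
# short-term effects. This is NOT the correction the follow-up post develops.
grad_effect_yes = 0.29  # SD units, average across the 5 studies with graduation data
short_term_yes = 0.44   # weighted avg short-term effect, studies with graduation data
short_term_no = 0.14    # weighted avg short-term effect, studies without

implied_grad_effect_no = grad_effect_yes * short_term_no / short_term_yes
print(f"Implied graduation effect for the remaining studies: {implied_grad_effect_no:.2f} SD")
# ~0.09 SD, suggesting a pooled estimate across all 9 studies would sit
# well below the reported 0.29 SD.
```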
[See the follow-up (part 2), which attempts to statistically correct for this bias.]