“A 680,000-Person Megastudy of Nudges to Encourage Vaccination in Pharmacies”, Katherine L. Milkman, Linnea Gandhi, Mitesh S. Patel, Heather N. Graci, Dena M. Gromet, Hung Ho, Joseph S. Kay, Timothy W. Lee, Jake Rothschild, Jonathan E. Bogard, Ilana Brody, Christopher F. Chabris, Edward Chang, Gretchen B. Chapman, Jennifer E. Dannals, Noah J. Goldstein, Amir Goren, Hal Hershfield, Alex Hirsch, Jillian Hmurovic, Samantha Horn, Dean S. Karlan, Ariella S. Kristal, Cait Lamberton, Michelle N. Meyer, Allison H. Oakes, Maurice E. Schweitzer, Maheen Shermohammed, Joachim Talloen, Caleb Warren, Ashley Whillans, Kuldeep N. Yadav, Julian J. Zlatev, Ron Berman, Chalanda N. Evans, Rahul Ladhania, Jens Ludwig, Nina Mazar, Sendhil Mullainathan, Christopher K. Snider, Jann Spiess, Eli Tsukayama, Lyle Ungar, Christophe Van den Bulte, Kevin G. Volpp, Angela L. Duckworth (2022-02-08; sociology of tech, scientific bias, forecasting):
We tested 22 different text reminders using a variety of different behavioral science principles to nudge flu vaccination. Reminder texts increased vaccination rates by an average of 2.0 percentage points (6.8%) over a business-as-usual control condition. The most-effective messages reminded patients that a flu shot was waiting for them and delivered reminders on multiple days. The top-performing intervention included 2 texts 3d apart and stated that a vaccine was “waiting for you.”
Forecasters (scientists especially, but also laymen on Prolific) failed to anticipate that this would be the best-performing treatment, underscoring the value of testing.
Encouraging vaccination is a pressing policy problem. To assess whether text-based reminders can encourage pharmacy vaccination and what kinds of messages work best, we conducted a megastudy.
We randomly assigned 689,693 Walmart pharmacy patients to receive one of 22 different text reminders, designed using a variety of behavioral science principles to nudge flu vaccination, or to a business-as-usual control condition that received no messages.
We found that the reminder texts that we tested increased pharmacy vaccination rates by an average of 2.0 percentage points, or 6.8%, over a 3-mo follow-up period. The most-effective messages reminded patients that a flu shot was waiting for them and delivered reminders on multiple days. The top-performing intervention included 2 texts delivered 3d apart and communicated to patients that a vaccine was “waiting for you”.
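The two reported effect sizes imply a baseline rate that the abstract leaves implicit: if a 2.0-percentage-point absolute lift equals a 6.8% relative increase over control, the control vaccination rate must be roughly 2.0 / 0.068 ≈ 29.4%. A quick arithmetic check:

```python
# Implied baseline from the two reported effect sizes.
lift_pp = 2.0          # absolute lift, in percentage points
relative_lift = 0.068  # same lift expressed relative to control (6.8%)

control_rate = lift_pp / relative_lift   # percentage points
treatment_rate = control_rate + lift_pp

print(round(control_rate, 1), round(treatment_rate, 1))  # 29.4 31.4
```

So the text reminders moved pharmacy vaccination from about 29.4% to about 31.4% on average across conditions.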
Neither experts [r = 0.03] nor lay people [r = 0.60] anticipated that this would be the best-performing treatment, underscoring the value of simultaneously testing many different nudges in a highly powered megastudy.
[Keywords: vaccination, COVID-19, nudge, influenza, field experiment]
…To assess how well the relative success of these messages could be forecasted ex ante, both the scientists who developed the texts and a separate sample of lay survey respondents predicted the impact of different interventions on flu vaccination rates…

Prediction Study Method: To assess the ex ante predictability of this megastudy’s results, we collected forecasts of different interventions’ efficacy from 2 populations. First, in November 2020, we invited each of the scientists who designed one or more interventions in our megastudy to estimate the vaccination rates among patients in all 22 intervention conditions as well as among patients in the business-as-usual control condition. 24 scientists participated (89% of those asked), including at least one representative from each design team, and these scientists made a total of 528 forecasts. In January 2021, we also recruited 406 survey respondents from Prolific to predict the vaccination rates among patients in 6 different intervention conditions (independently selected randomly from the 22 interventions for each forecaster) as well as among patients in the business-as-usual control, which generated a total of 2,842 predictions. Participants from both populations were shown a realistic rendering of the messages sent in a given intervention and then asked to predict the percentage of people in that condition who would get a flu shot from Walmart pharmacy between September 25, 2020, and October 30, 2020. For more information on recruitment materials, participant demographics, and the prediction survey, refer to SI Appendix.
…Prediction Study Results: The average predictions of scientists did not correlate with observed vaccination rates across our megastudy’s 23 different experimental conditions (n = 23, r = 0.03, and p = 0.880).
Prolific raters, in contrast, on average accurately predicted relative vaccination rates across our megastudy’s conditions (n = 23, r = 0.60, and p = 0.003), a marginally statistically significant difference (Dunn and Clark’s z-test: p = 0.048; Steiger’s z-test: p = 0.051; and Meng et al.’s z-test: p = 0.055) (25–27).
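All three cited tests compare two *dependent* (overlapping) correlations: the scientists’ predictions and the Prolific raters’ predictions are each correlated with the same 23 observed vaccination rates. A minimal sketch of one of them, Steiger’s (1980) Z-test, is below; the correlation `r23` between the two sets of predictions is not reported in the paper, so any value plugged in here is a hypothetical assumption for illustration.

```python
import math

def steiger_z(r12, r13, r23, n):
    """Steiger's (1980) Z-test for two dependent correlations sharing a variable.

    r12: cor(X, Y1)  e.g. actual rates vs. lay predictions
    r13: cor(X, Y2)  e.g. actual rates vs. scientist predictions
    r23: cor(Y1, Y2) correlation between the two prediction sets
    n:   number of paired observations (here, 23 conditions)
    Returns (Z, two-sided p) under a standard-normal null.
    """
    z12, z13 = math.atanh(r12), math.atanh(r13)  # Fisher z-transforms
    rbar = (r12 + r13) / 2.0
    # Steiger's approximation to the covariance of the two Fisher z's:
    s = (r23 * (1 - 2 * rbar**2)
         - 0.5 * rbar**2 * (1 - 2 * rbar**2 - r23**2)) / (1 - rbar**2) ** 2
    z = (z12 - z13) * math.sqrt((n - 3) / (2 * (1 - s)))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p
```

With r12 = 0.60, r13 = 0.03, and n = 23 as reported, the resulting Z (and whether it reproduces the paper’s p = 0.051) depends on the unreported r23; the test rewards forecaster sets whose prediction errors are less correlated with each other.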
Further, the median scientist prediction of the average lift in vaccinations across our interventions was 6.2%, while the median Prolific respondent guess was 8.3%—remarkably close to the observed average of 8.9%. Notably, neither population correctly guessed the top-performing intervention. In fact, scientists’ predictions placed it 15th out of 22, while Prolific raters’ predictions placed it 16th out of 22 (SI Appendix, Table S18).
Figure S3: By condition, the actual vaccination rate versus the 95% confidence interval predictions by scientists (24 scientists making a total of 552 predictions, Panel A) and lay predictors (406 individuals making a total of 2,842 predictions, Panel B).