“Truncating Bar Graphs Persistently Misleads Viewers”, Brenda W. Yang, Camila Vargas Restrepo, Matthew L. Stanley, Elizabeth J. Marsh (2021-06-01):

Data visualizations and graphs are increasingly common in both scientific and mass media settings. While graphs are useful tools for communicating patterns in data, they also have the potential to mislead viewers.

In 5 studies, we provide empirical evidence that y-axis truncation [in bar charts] leads viewers to perceive illustrated differences as larger (i.e., a truncation effect). This effect persisted after viewers were taught about the effects of y-axis truncation and was robust across participants, with 83.5% of participants across these 5 studies showing a truncation effect. We also found that individual differences in graph literacy failed to predict the size of individuals’ truncation effects. PhD students in both quantitative fields and the humanities were susceptible to the truncation effect, but quantitative PhD students were slightly more resistant when no warning about truncated axes was provided.

We discuss the implications of these results for the underlying mechanisms and make practical recommendations for training critical consumers and creators of graphs.

[Keywords: data visualization, misleading graphs, misinformation, axis truncation, bar graphs]


News media, opinion pieces, social media, and scientific publications are full of graphs meant to communicate and persuade. Such graphs may be technically accurate in displaying correct numerical values and yet misleading because they lead people to draw inappropriate conclusions. In 5 studies, we investigate the practice of truncating the y-axis of bar graphs to start at a non-zero value. While this has been called one of “the worst of crimes in data visualization” by The Economist, it is surprisingly common not just in news and social media, but also in scientific conferences and publications. This might be because the injunction to “not truncate the axis!” may be seen as more dogmatic than data-driven.

In 5 studies, we examine how truncated graphs consistently lead people to perceive a larger difference between 2 quantities, and we find that 83.5% of participants across studies show a truncation effect. In other words, 83.5% of people in our studies judged differences illustrated by truncated bar graphs as larger than differences illustrated by graphs where the y-axis starts at 0.
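The mechanism is simple arithmetic: starting the axis above 0 subtracts the same baseline from every bar, which inflates the visual ratio between the drawn bar heights. A minimal sketch with hypothetical values (not the paper’s actual stimuli):

```python
def bar_height_ratio(a, b, baseline=0.0):
    """Ratio of the taller bar's drawn height to the shorter bar's,
    when the y-axis starts at `baseline` instead of 0."""
    lo, hi = sorted((a, b))
    return (hi - baseline) / (lo - baseline)

# Two hypothetical quantities differing by about 11%.
a, b = 45.0, 50.0

full = bar_height_ratio(a, b, baseline=0.0)        # y-axis starts at 0
truncated = bar_height_ratio(a, b, baseline=40.0)  # y-axis starts at 40

print(f"full axis: taller bar is {full:.2f}x the shorter bar")       # ~1.11x
print(f"truncated: taller bar is {truncated:.2f}x the shorter bar")  # 2.00x
```

With the axis truncated at 40, a real difference of roughly 11% is drawn as a 2-to-1 difference in bar height, which is the distortion the truncation effect measures.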

Surprisingly, we found that the truncation effect was very persistent. People were misled by y-axis truncation even when we thoroughly explained the technique right before they rated graphs, although this warning reduced the degree to which people were misled. People with extensive experience working with data and statistics (i.e., PhD students in quantitative fields) were also susceptible to the truncation effect. Overall, our work shows the consequences of truncating bar graphs and the extent to which interventions, such as warning people, can help but are limited in their scope.


Study 1 established a paradigm for examining the effects of y-axis truncation within bar graphs, providing a useful method for studying deceptive graphs. In Study 2, we investigate whether providing an explicit explanation of y-axis truncation would reduce or eliminate the truncation effect…We expected that explicit warnings about y-axis truncation would give participants the information needed to identify truncated graphs and adjust their judgments accordingly.

Figure 3: Raincloud plots for Study 1 (a) and Study 2 (b).
These ‘raincloud plots’ (Allen et al 2018) depict average participant ratings for truncated and control graphs, respectively. Error bars reflect correlation-adjusted and difference-adjusted 95% CIs of the means (Cousineau 2017). The truncated versus control graphs variable was manipulated within-subjects; each participant is represented by 2 points. In Study 2, all participants received an explanatory warning.

…The results of Study 2 were surprising in that an explicit warning did not eliminate the truncation effect. To further investigate this, in Study 3, we directly manipulate in a single experiment whether participants are given an explanatory warning about y-axis truncation. Doing so allows us to directly compare the effects of having an explanatory warning or not on the truncation effect.

Figure 4: Raincloud plot for Study 3.
Error bars reflect correlation-adjusted and difference-adjusted 95% CIs of the means, and points represent each participant twice. The truncated versus control graphs variable was manipulated within-subjects.

Study 4: In Study 3, we found that providing an explanatory warning before participants rated control and truncated bar graphs reduced but did not eliminate the truncation effect. In the world, however, an explicit warning will rarely immediately precede graphs with truncated vertical axes. Here, we extend the findings of Study 3 by asking participants to provide judgments about a new set of bar graphs after a 1-day delay. The purpose in doing so was to examine whether the effects of the explicit warning on the first day will extend to the next day.

Figure 5: Raincloud Plot for Study 4.
Error bars reflect correlation-adjusted and difference-adjusted 95% CIs of the means. Points represent each participant twice. The session (1 and 2) and graph type (truncated and control) variables were within-subject manipulations.
Cohen’s d and 95% CIs from left to right: No Warning (Session 1) = 0.69 [0.37, 1.02], No Warning (Session 2) = 0.62 [0.30, 0.94]; Warning (Session 1) = 0.42 [0.10, 0.74], Warning (Session 2) = 0.40 [0.08, 0.72]

…In Study 5, we examined the size of the truncation effect in 2 doctoral student populations: PhD students pursuing quantitative fields versus the humanities.

Figure 6: Raincloud plot for Study 5.
Error bars reflect correlation-adjusted and difference-adjusted 95% CIs of the means. Points represent each participant twice. The truncated versus control graphs variable was manipulated within-subjects.