“The Human Black-Box: The Illusion of Understanding Human Better Than Algorithmic Decision-Making”, 2022-02-10:
As algorithms increasingly replace human decision-makers, concerns have been voiced about the black-box nature of algorithmic decision-making. These concerns raise an apparent paradox. In many cases, human decision-makers are just as much of a black-box as the algorithms that are meant to replace them. Yet, the inscrutability of human decision-making seems to raise fewer concerns.
We suggest that one reason for this paradox is that people foster an illusion of understanding human better than algorithmic decision-making, when in fact both are black-boxes. We further propose that this occurs, at least in part, because people project their own intuitive understanding of a decision-making process onto other humans more than onto algorithms, and as a result come to believe that they understand human decision-making better than algorithmic decision-making, when this understanding is merely an illusion.
[Keywords: understanding, projection, illusion of explanatory depth, algorithms, algorithm aversion]
…Drawing on this literature, we propose that because people are more similar to other humans than to algorithms (Epley et al 2007; Gray et al 2007; 2006), they are more likely to rely on their own understanding of a decision-making process to intuit how other humans, versus algorithms, make decisions. The privileged—yet often misguided—view that projection provides into other humans’ minds can foster the illusion of understanding human better than algorithmic decision processes, when in fact, both are black-boxes.
6 experiments test our hypotheses. Experiments 1A–C test whether people foster a stronger illusion of understanding human than algorithmic decision-making across 3 domains. Experiments 2, 3, and 4 (in online supplemental materials E) test whether projection accounts for this phenomenon in each domain. Experiment 4 also tests how illusory understanding affects trust in human versus algorithmic decisions. The New York University and Winthrop University Institutional Review Boards (IRBs) approved the experimental protocols. In all experiments, the sample size was predetermined, and a sensitivity power analysis (Faul et al 2009) indicated that small-to-medium-sized effects could be detected with a power of 0.80. We report all conditions, manipulations, measures, and data exclusions. Questions to screen for bots and avoid differential dropout were included at the beginning of each experiment (see online supplemental materials B).