“Why Computers Will Never Read (or Write) Literature: A Logical Proof and a Narrative”, 2021:
In response to Franco Moretti’s project of distant reading and other recent developments in the Digital Humanities, this article offers a proof that computers will never learn to read or write literature.
The proof has three main components: (1) computer artificial intelligences (including machine-learning algorithms) run on the CPU’s Arithmetic Logic Unit, which performs all of its computations using symbolic logic; (2) symbolic logic is incapable of causal reasoning; and (3) causal reasoning is required for processing the narrative components of literature, including plot, character, style, and voice. This proof is presented both in logical form and as a narrative.
[Impressively bad. Usually one has to resort to Gödel or Turing for such levels of anti-AI crankery.]
The narrative’s beginning traces the origin of automated symbolic-logic literature processors back before modern computing to the 1930s Cambridge scholar I. A. Richards. The middle recounts how these processors became a basis of New Criticism, Cultural Poetics, and other 20th/21st-century theories of literary “interpretation.” And the end explores how those theories’ preference for symbolic logic over causal reasoning leaves them vulnerable to the same blind spot that early modern scientists detected in medieval universities: the inability to explain why literature (or anything else) works—and therefore the inability to comprehend how to use it.
[Keywords: digital humanities, interpretation, scientific method, causal reasoning, neuroscience, machine learning]