“Brains and Algorithms Partially Converge in Natural Language Processing”, 2022-02-16:
Deep learning algorithms trained to predict masked words from large amounts of text have recently been shown to generate activations similar to those of the human brain. However, what drives this similarity remains unknown.
Here, we systematically compare a variety of deep language models to identify the computational principles that lead them to generate brain-like representations of sentences. Specifically, we analyze the brain responses to 400 isolated sentences in a large cohort of 102 subjects, each recorded for 2 hours with functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG). We then test where and when each of these algorithms maps onto the brain responses. Finally, we estimate how the architecture, training, and performance of these models independently account for the generation of brain-like representations.
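This mapping is typically done with the standard encoding-model (“brain score”) approach: fit a linear map from a model’s sentence activations to the recorded brain responses, then evaluate it on held-out sentences. Below is a minimal sketch of that approach, assuming scikit-learn and synthetic stand-in data; the shapes, names, and scoring here are illustrative, not the authors’ exact pipeline:

```python
# Encoding-model "brain score" sketch: ridge-regress brain responses onto
# model activations, score with cross-validated per-voxel correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 400, 768, 200    # 400 sentences; one model layer; fMRI voxels (illustrative)
X = rng.standard_normal((n_sentences, n_features))   # model activations per sentence (stand-in data)
Y = rng.standard_normal((n_sentences, n_voxels))     # brain responses per sentence (stand-in data)

def brain_score(X, Y, n_splits=5):
    """Cross-validated Pearson correlation between predicted and observed responses."""
    scores = []
    for train, test in KFold(n_splits=n_splits).split(X):
        model = RidgeCV(alphas=np.logspace(-3, 6, 10)).fit(X[train], Y[train])
        Y_pred = model.predict(X[test])
        # Correlate predicted and observed responses per voxel, then average.
        r = [np.corrcoef(Y_pred[:, v], Y[test][:, v])[0, 1] for v in range(Y.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))

print(brain_score(X, Y))  # ~0 for random data; positive when representations are brain-like
```

On random data the score hovers near zero; a model whose representations track the brain’s yields reliably positive held-out correlations.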
Our analyses reveal 2 main findings. First, the similarity between the algorithms and the brain primarily depends on their ability to predict words from context. Second, this similarity reveals the rise and maintenance of perceptual, lexical, and compositional representations within each cortical region.
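The first finding turns on how well each network predicts words from context. One simple way to quantify that ability for a masked language model is its average log-probability on held-out words, as in this sketch using Hugging Face transformers (the model choice and scoring are assumptions for illustration, not the paper’s exact metric):

```python
# Score a sentence by masking each word in turn and averaging the
# log-probability the model assigns to the true word.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def word_prediction_score(sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    log_probs = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id   # hide one word at a time
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, -1)[ids[i]].item())
    return sum(log_probs) / len(log_probs)

print(word_prediction_score("The cat sat on the mat."))
```

It is this prediction ability, more than architecture per se, that the study finds tracks the brain scores above.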
Overall, this study shows that modern language algorithms partially converge towards brain-like solutions, and thus delineates a promising path to unravel the foundations of natural language processing.
See Also:
“Long-range and hierarchical language predictions in brains and algorithms”
“Thinking ahead: spontaneous prediction in context as a keystone of language in humans and machines”
“A massive 7T fMRI dataset to bridge cognitive and computational neuroscience”
“Inducing brain-relevant bias in natural language processing models”
“Mapping Between fMRI Responses to Movies and their Natural Language Annotations”
“Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query”