"Machine Reading, Fast and Slow: When Do Models "Understand" Language?", 2022-09-15:
Two of the most fundamental challenges in Natural Language Understanding (NLU) at present are: (a) how to establish whether deep learning-based models score highly on NLU benchmarks for the "right" reasons; and (b) to understand what those reasons would even be.
We investigate the behavior of reading comprehension models with respect to two linguistic "skills": coreference resolution and comparison. We propose a definition for the reasoning steps expected from a system that would be "reading slowly", and compare that with the behavior of 5 models of the BERT family of various sizes, observed through saliency scores and counterfactual explanations.
We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the "right" information, but even they struggle with generalization, suggesting that they still learn specific lexical patterns rather than the general principles of comparison.