"Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap", 2024-02-29:
We propose a framework for robust evaluation of reasoning capabilities of language models, using functional variants of benchmarks.
Models that solve a reasoning test should exhibit no difference in performance over the static version of a problem compared to a snapshot of the functional variant. We have rewritten the relevant fragment of the MATH benchmark into its functional variant MATH(), with functionalization of other benchmarks to follow.
When evaluating current state-of-the-art models over snapshots of MATH(), we find a reasoning gap: the percentage difference between the static and functional accuracies. We find reasoning gaps of 35.9%–59.4% among the state-of-the-art closed and open-weights models that perform well on static benchmarks, with the caveat that the gaps are likely to be smaller with more sophisticated prompting strategies.
Here we show that models which anecdotally have good reasoning performance on real-world tasks have quantifiably lower gaps, motivating the open problem of building "gap 0" models.
Code for evaluation and new evaluation datasets (three MATH() snapshots) are publicly available on GitHub.
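[The gap metric above can be sketched in a few lines; this assumes the gap is the drop from static to functional accuracy normalized by the static accuracy, which may not match the paper's exact definition:]

```python
def reasoning_gap(static_acc: float, functional_acc: float) -> float:
    """Percentage drop from static-benchmark accuracy to accuracy on a
    functional (parameter-varied) snapshot of the same problems.

    Assumes relative normalization by static accuracy; the paper's exact
    normalization may differ.
    """
    if static_acc <= 0:
        raise ValueError("static accuracy must be positive")
    return 100.0 * (static_acc - functional_acc) / static_acc

# A model scoring 0.80 on the static set but 0.40 on a functional
# snapshot has a 50% reasoning gap; a "gap 0" model scores the same
# on both.
print(reasoning_gap(0.80, 0.40))
```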
[Does not appear to have made much effort to control for length/difficulty, which will exaggerate the performance loss; compare GSM1k and see Engstrom on psychometric issues in replication dataset construction.]