"Autonomous Data Selection With Language Models for Mathematical Texts", 2024-02-12 ():
To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy that leverages base language models for autonomous data selection.
Departing from conventional supervised fine-tuning and classifiers trained on human-annotated data, our approach, Autonomous Data Selection (AutoDS), uses meta-prompted language models as zero-shot verifiers to autonomously evaluate and select high-quality mathematical content.
To demonstrate the efficacy of our method, we continually pretrained a 7B-parameter language model on our curated dataset, achieving substantial improvements in downstream performance on the MATH, GSM8K, and BIG-Bench Hard (BBH) tasks while using orders of magnitude fewer tokens than previous continual-pretraining works.
Our method achieves a 2× increase in pretraining token efficiency over state-of-the-art baselines, underscoring its potential for enhancing models' mathematical reasoning capabilities.
The AutoMathText dataset is available at https://huggingface.co/datasets/math-ai/AutoMathText. The code is available on GitHub.
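The zero-shot verification idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the meta-prompt wording, the function names, and the exact scoring formula (a two-way softmax over the logits the base model assigns to "YES"/"NO" continuations of the meta-prompt) are assumptions.

```python
import math

# Hypothetical meta-prompt asking a base LM to judge whether a passage is
# high-quality mathematical content (illustrative wording only).
META_PROMPT = (
    "You are an expert mathematics curator. Does the following passage "
    "contain high-quality mathematical content? Answer YES or NO.\n\n"
    "Passage: {passage}\nAnswer:"
)

def autods_score(logit_yes: float, logit_no: float) -> float:
    """Turn the base LM's logits for the 'YES' and 'NO' continuation tokens
    into a selection score in (0, 1) via a two-way softmax (assumed form)."""
    e_yes = math.exp(logit_yes)
    return e_yes / (e_yes + math.exp(logit_no))

def select(docs_with_logits, threshold: float = 0.5):
    """Keep documents whose verifier score clears a threshold.

    docs_with_logits: iterable of (text, (logit_yes, logit_no)) pairs,
    where the logits would come from one forward pass of the base LM on
    META_PROMPT.format(passage=text).
    """
    return [
        text
        for text, (logit_yes, logit_no) in docs_with_logits
        if autods_score(logit_yes, logit_no) >= threshold
    ]
```

Because the score is read off a single forward pass of an untuned base model, no labeled training data or separate classifier is needed, which is what makes the selection "autonomous" in the abstract's sense.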