“Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks”, Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith, 2020-04-23:

Language models pretrained on text from a wide variety of sources form the foundation of today’s NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task.

We present a study across 4 domains (biomedical and computer science publications, news, and reviews) and 8 classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task’s unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable.
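In practice, both phases amount to continuing masked-language-model training on new unlabeled text before fine-tuning on the labeled task. A minimal sketch, assuming the HuggingFace Transformers `Trainer` API and a placeholder corpus file `domain_corpus.txt` (the paper itself continued pretraining RoBERTa; hyperparameters below are illustrative, not the paper's):

```python
# Second-phase pretraining (DAPT on a domain corpus, or TAPT on the task's
# own unlabeled text) -- a sketch, not the authors' original training code.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Unlabeled in-domain text (DAPT) or the task's unlabeled text (TAPT);
# "domain_corpus.txt" is a placeholder path.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

# Same masked-LM objective as the original pretraining phase.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-checkpoint",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the adapted checkpoint is then fine-tuned on labeled data
```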

Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.
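The data-selection alternative mentioned above can be sketched as nearest-neighbor retrieval: for each task example, pull its closest sentences from a large unlabeled in-domain pool and use them as extra task-adaptive pretraining text. The paper selects neighbors with lightweight VAMPIRE embeddings; the TF-IDF features and the `task_texts`/`pool_texts` names below are illustrative stand-ins:

```python
# Hedged sketch of simple data selection for augmenting a task corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

def select_neighbors(task_texts, pool_texts, k=50):
    # Embed both the task text and the candidate pool (pool must have >= k
    # sentences); TF-IDF stands in for the paper's VAMPIRE embeddings.
    vec = TfidfVectorizer(max_features=50_000)
    pool_emb = vec.fit_transform(pool_texts)
    task_emb = vec.transform(task_texts)

    # For each task example, retrieve its k nearest pool sentences.
    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(pool_emb)
    _, idx = nn.kneighbors(task_emb)

    # Deduplicate and return the selected sentences as augmentation text.
    selected = {pool_texts[j] for row in idx for j in row}
    return list(selected)
```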