“Unnatural Instructions: Tuning Language Models With (Almost) No Human Labor”, Or Honovich, Thomas Scialom, Omer Levy, Timo Schick (2022-12-19):

Instruction tuning enables pretrained language models to perform new tasks from inference-time natural language descriptions. These approaches rely on vast amounts of human supervision in the form of crowdsourced datasets or user interactions.

In this work, we introduce Unnatural Instructions: a large dataset of creative and diverse instructions, collected with virtually no human labor.

We collect 64,000 examples by prompting a language model with 3 seed examples of instructions and eliciting a fourth. This set is then expanded by prompting the model to rephrase each instruction, creating a total of ~240,000 examples of instructions, inputs, and outputs.
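As a rough illustration of this collection loop, the sketch below shows how three seed demonstrations can be used to elicit a fourth instruction and how rephrasing expands the set. It is a minimal sketch, not the paper's released pipeline: `complete(prompt)` is a hypothetical stand-in for any text-completion model call, and the seed strings are placeholder examples rather than the paper's actual prompts.

```python
# Minimal sketch of the seed-prompt collection loop, assuming a generic
# text-completion model behind a hypothetical `complete(prompt)` helper.
# Seed strings below are illustrative placeholders, not the paper's prompts.

SEED_EXAMPLES = [
    "Instruction: Translate the sentence into French.\nInput: Where is the library?",
    "Instruction: Summarize the paragraph in one sentence.\nInput: <paragraph>",
    "Instruction: Classify the review as positive or negative.\nInput: <review>",
]


def complete(prompt: str) -> str:
    """Stand-in for a call to a pretrained language model (e.g. via an API)."""
    raise NotImplementedError("plug in your model call here")


def generate_example() -> str:
    """Show three seed demonstrations and elicit a fourth instruction-input pair."""
    prompt = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(SEED_EXAMPLES))
    prompt += "\n\nExample 4:\n"
    return complete(prompt).strip()


def rephrase(instruction: str) -> str:
    """Ask the model to restate an instruction, diversifying its phrasing."""
    return complete(f"Rephrase the following instruction:\n{instruction}\nRephrased:").strip()


def build_dataset(n_core: int = 64_000) -> list[str]:
    """Collect a core set of examples, then grow it with rephrasings."""
    core = [generate_example() for _ in range(n_core)]
    return core + [rephrase(example) for example in core]
```

The sketch covers only instruction collection and rephrasing; in the full pipeline, outputs for each instruction-input pair are also generated by the model, yielding the ~240,000 instruction-input-output examples described above.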

Experiments show that despite containing a fair amount of noise, training on Unnatural Instructions rivals the effectiveness of training on open-source manually-curated datasets, surpassing the performance of models such as T0++ and Tk-Instruct across various benchmarks.

These results demonstrate the potential of model-generated data as a cost-effective alternative to crowdsourcing for dataset expansion and diversification.