β€œAn Empirical Exploration in Quality Filtering of Text Data”, Leo Gao2021-09-02 (, ; similar)⁠:

[cf. Sorscher et al 2022] While conventional wisdom suggests that more aggressively filtering data from low-quality sources like Common Crawl always monotonically improves the quality of training data, we find that aggressive filtering can in fact lead to a decrease in model quality on a wide array of downstream tasks for a GPT-like language model.

We speculate that this is because optimizing sufficiently strongly for a proxy metric harms performance on the true objective, suggesting a need for more robust filtering objectives when attempting to filter more aggressively.
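The filtering setup being varied here can be sketched as ranking documents by a proxy quality score and keeping only the top fraction, with "more aggressive" meaning a smaller kept fraction. A minimal illustration, assuming a hypothetical classifier whose scores are stood in for by random numbers (the function and document names are invented for illustration, not from the paper):

```python
import random

def filter_by_quality(docs, scores, keep_fraction):
    """Keep the top `keep_fraction` of documents by proxy quality score.

    A smaller keep_fraction corresponds to more aggressive filtering;
    the paper's finding is that past some point this hurts, because
    the proxy score is optimized at the expense of the true objective.
    """
    ranked = sorted(zip(scores, docs), key=lambda pair: pair[0], reverse=True)
    k = max(1, int(len(docs) * keep_fraction))
    return [doc for _, doc in ranked[:k]]

random.seed(0)
docs = [f"doc{i}" for i in range(1000)]
scores = [random.random() for _ in docs]  # stand-in for classifier scores

mild = filter_by_quality(docs, scores, keep_fraction=0.9)        # light filtering
aggressive = filter_by_quality(docs, scores, keep_fraction=0.1)  # aggressive filtering
print(len(mild), len(aggressive))
```

Note that the aggressively-filtered set is a strict subset of the mildly-filtered one, so the only lever is how far down the proxy ranking the cutoff falls.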

We hope this work motivates more detailed analysis of how dataset-filtering design choices affect downstream model performance.