“VoxLingua107: a Dataset for Spoken Language Recognition”, Jörgen Valk, Tanel Alumäe (2020-11-25):

This paper investigates the use of automatically collected web audio data for the task of spoken language recognition. We generate semi-random search phrases from language-specific Wikipedia data that are then used to retrieve videos from YouTube for 107 languages.
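The phrase-generation step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it samples short word n-grams from Wikipedia sentences to use as search queries (the function name, parameters, and toy sentences are all hypothetical).

```python
import random

def make_search_phrases(sentences, n_phrases=5, phrase_len=3, seed=0):
    """Sample short word n-grams from Wikipedia sentences to serve as
    semi-random search queries (illustrative sketch; the paper's actual
    phrase-generation method may differ)."""
    rng = random.Random(seed)
    phrases = []
    for _ in range(n_phrases):
        words = rng.choice(sentences).split()
        if len(words) < phrase_len:
            continue
        # Pick a random contiguous window of phrase_len words.
        start = rng.randrange(len(words) - phrase_len + 1)
        phrases.append(" ".join(words[start:start + phrase_len]))
    return phrases

# Toy example with two Estonian Wikipedia-style sentences:
sentences = [
    "Tallinn on Eesti pealinn ja suurim linn",
    "Eesti keel kuulub soome-ugri keelte hulka",
]
print(make_search_phrases(sentences))
```

Each resulting phrase could then be submitted to YouTube search to retrieve candidate videos for that language.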

Speech activity detection and speaker diarization are used to extract segments from the videos that contain speech. Post-filtering is used to remove segments from the database that are likely not in the given language, increasing the proportion of correctly labeled segments to 98%, based on crowd-sourced verification.
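The post-filtering step can be sketched as a simple confidence threshold on per-segment language-identification scores. This is an illustrative stand-in, not the paper's actual filter; the segment identifiers, score structure, and threshold value are assumptions.

```python
def post_filter(segments, lang_scores, target_lang, threshold=0.5):
    """Keep only segments whose language-ID score for the target
    language meets a threshold (hypothetical sketch of post-filtering;
    the paper's actual criterion may differ)."""
    kept = []
    for seg in segments:
        # Missing scores default to 0.0, so unscored segments are dropped.
        score = lang_scores.get(seg, {}).get(target_lang, 0.0)
        if score >= threshold:
            kept.append(seg)
    return kept

# Toy example: segment "b" is likely mislabeled and gets filtered out.
segments = ["a", "b"]
scores = {"a": {"et": 0.9}, "b": {"et": 0.2}}
print(post_filter(segments, scores, "et"))
```

Raising the threshold trades recall (total hours kept) for precision (the fraction of correctly labeled segments).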

The resulting training set (VoxLingua107) totals 6,628 hours (62 hours per language on average) and is accompanied by an evaluation set of 1,609 verified utterances. We use the data to build models for several spoken language identification tasks. Experiments show that the automatically retrieved training data gives results competitive with those from hand-labeled proprietary datasets.

The dataset is publicly available.