Bibliography (4):

  1. LLaMA: Open and Efficient Foundation Language Models. https://arxiv.org/abs/2302.13971

  2. HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units. https://arxiv.org/abs/2106.07447

  3. WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing. https://arxiv.org/abs/2110.13900

  4. Whisper: Robust Speech Recognition via Large-Scale Weak Supervision. https://arxiv.org/abs/2212.04356