Bibliography (3):

  1. Llama 2: Open Foundation and Fine-Tuned Chat Models

  2. LLaMA: Open and Efficient Foundation Language Models

  3. Averaging Weights Leads to Wider Optima and Better Generalization