Bibliography (4):

  1. Implicit Chain-of-Thought Reasoning via Knowledge Distillation

  2. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

  3. Language Models are Unsupervised Multitask Learners

  4. Training Verifiers to Solve Math Word Problems