Bibliography (8):

  1. What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

  2. What exactly has TabPFN learned to do?

  3. Attention Is All You Need

  4. PFNs: Transformers Can Do Bayesian Inference

  5. OpenML Benchmarking Suites

  6. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time

  7. ‘dynamic evaluation (NN)’ directory

  8. Wikipedia Bibliography:

    1. Bayesian statistics