Bibliography (10):

  1. Attention Is All You Need

  2. An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale (Vision Transformer)

  3. MLP-Mixer: An all-MLP Architecture for Vision

  4. Deep Residual Learning for Image Recognition

  5. https://github.com/locuslab/convmixer

  6. This section presents an expanded (but still quite compact) version of the terse ConvMixer implementation that we presented in the paper. The code is given in **Figure 7**. We also present an even more terse implementation in **Figure 8**, which to the best of our knowledge is the first model that achieves the elusive dual goals of 80%+ ImageNet top-1 accuracy while also fitting into a tweet. (A hedged PyTorch sketch of this block structure follows the bibliography.)

  7. https://x.com/zhansheng/status/1446145168579743746

  8. https://x.com/ashertrockman/status/1486059382211330051

  9. https://x.com/BlancheMinerva/status/1632117587696812033

  10. Wikipedia Bibliography:

    1. Code golf
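
Entry 6 above quotes a description of the compact ConvMixer implementation hosted at the repository in entry 5. As a rough illustration of the block structure that description refers to, here is a minimal sketch, assuming PyTorch; the function name `conv_mixer`, the `Residual` helper, and the default hyperparameters are chosen here for readability and are not taken verbatim from the cited code.

```python
import torch.nn as nn


class Residual(nn.Module):
    """Adds the module's input back to its output (skip connection)."""

    def __init__(self, fn):
        super().__init__()
        self.fn = fn

    def forward(self, x):
        return self.fn(x) + x


def conv_mixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    """Sketch of a ConvMixer: a patch-embedding convolution followed by
    `depth` blocks of depthwise (spatial) and pointwise (channel) mixing."""
    return nn.Sequential(
        # Patch embedding: strided convolution over non-overlapping patches.
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(
                # Depthwise convolution mixes spatial locations per channel.
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            # Pointwise (1x1) convolution mixes information across channels.
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )
```

For instance, `conv_mixer(768, 32)` builds one such model; the specific width, depth, kernel, and patch settings that reach the 80%+ ImageNet top-1 figure mentioned in entry 6 are those reported in the cited paper and repository, not something this sketch guarantees.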