Bibliography (15):

  1. Sustainable AI: Environmental Implications, Challenges and Opportunities

  2. ‘end-to-end’ directory

  3. https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/

  4. Unsupervised Cross-lingual Representation Learning at Scale

  5. General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference

  6. MViT: Multiscale Vision Transformers

  7. Masked Autoencoders As Spatiotemporal Learners

  8. Introducing the AI Research SuperCluster—Facebook’s cutting-edge AI supercomputer for AI research

  9. Muppet: Massive Multi-task Representations with Pre-Finetuning

  10. Building Machine Translation Systems for the Next Thousand Languages

  11. PIXEL: Language Modeling with Pixels

  12. GrokNet: Unified Computer Vision Model Trunk and Embeddings For Commerce