Bibliography (15):

  1. GPT-3 Creative Fiction § BPEs

  2. DALL·E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents § 7. Limitations and Risks

  3. Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens

  4. ByT5 model for massively multilingual grapheme-to-phoneme conversion

  5. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

  6. PaLM: Scaling Language Modeling with Pathways

  7. Imagen: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

  8. Character-Aware Models Improve Visual Text Rendering: https://arxiv.org/pdf/2212.10562.pdf#page=3&org=google

  9. Stable Diffusion Public Release

  10. DALL·E 1: Creating Images from Text

  11. DALL·E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents

  12. Parti: Scaling Autoregressive Models for Content-Rich Text-to-Image Generation (https://parti.research.google/)

  13. eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers

  14. mT5: A massively multilingual pre-trained text-to-text transformer

  15. ByT5: Towards a token-free future with pre-trained byte-to-byte models