Bibliography (30):

  1. TalkRL: The Reinforcement Learning Podcast: Aravind Srinivas 2: Aravind Srinivas, Research Scientist at OpenAI, Returns to Talk Decision Transformer, VideoGPT, Choosing Problems, and Explore vs Exploit in Research Careers

  2. ODT: Online Decision Transformer

  3. Attention Is All You Need

  4. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  5. Decision Transformer: Reinforcement Learning via Sequence Modeling

  6. GPT-3 Creative Fiction § Prompts As Programming

  7. OpenAI/gym: A Toolkit for Developing and Comparing Reinforcement Learning Algorithms

  8. Decision Transformer: Reinforcement Learning via Sequence Modeling (PDF): https://kzl.github.io/assets/decision_transformer.pdf

  9. kzl/decision-transformer (official Decision Transformer codebase): https://github.com/kzl/decision-transformer

  10. MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

  11. Reinforcement Learning Upside Down: Don’t Predict Rewards—Just Map Them to Actions

  12. Learning Relative Return Policies With Upside-Down Reinforcement Learning

  13. A Very Unlikely Chess Game

  14. Transformers Play Chess

  15. The Value Equivalence Principle for Model-Based Reinforcement Learning

  16. Shaking the foundations: delusions in sequence models for interaction and control

  17. Trajectory Transformer: Reinforcement Learning as One Big Sequence Modeling Problem

  18. GPT-2 Preference Learning for Music Generation § Decision Transformers: Preference Learning As Simple As Possible

  19. RNN Metadata for Mimicking Author Style § Inline Metadata Trick

  20. CTRL: A Conditional Transformer Language Model For Controllable Generation

  21. Towards a Human-like Open-Domain Chatbot

  22. Controllable Generation from Pre-trained Language Models via Inverse Prompting

  23. Architext: https://architext.design/about/

  24. DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language

  25. CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3

  26. CogView: Mastering Text-to-Image Generation via Transformers

  27. Choose-Your-Own-Adventure AI Dungeon Games