Bibliography (6):

  1. https://models.aminer.cn/cogvideo/

  2. THUDM/CogVideo (GitHub repository): code for the ICLR 2023 paper "CogVideo: Large-Scale Pretraining for Text-to-Video Generation via Transformers"

  3. https://github.com/THUDM/CogVideo#download

  4. "Language Models are Few-Shot Learners" (GPT-3)

  5. "CogView: Mastering Text-to-Image Generation via Transformers"

  6. "CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers"