Bibliography:

  1. GANs Didn’t Fail, They Were Abandoned

  2. ‘neural net’ tag

  3. ‘BigGAN’ tag

  4. ‘data-augmented GANs’ tag

  5. ‘StyleGAN anime’ tag

  6. ‘StyleGAN’ tag

  7. ‘ProGAN’ tag

  8. ‘ML dataset’ tag

  9. ‘diffusion model’ tag

  10. ‘CLIP samples’ tag

  11. Research Ideas

  12. GANs Didn’t Fail, They Were Abandoned

  13. MaskBit: Embedding-free Image Generation via Bit Tokens

  14. SF-V: Single Forward Video Generation Model

  15. VideoGigaGAN: Towards Detail-rich Video Super-Resolution

  16. A Study in Dataset Pruning for Image Super-Resolution

  17. Hierarchical Feature Warping and Blending for Talking Head Animation

  18. APISR: Anime Production Inspired Real-World Anime Super-Resolution

  19. Re:Draw—Context Aware Translation as a Controllable Method for Artistic Production

  20. MobileDiffusion: Subsecond Text-to-Image Generation on Mobile Devices

  21. Adversarial Diffusion Distillation

  22. UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs

  23. Application of Generative Adversarial Networks in Color Art Image Shadow Generation

  24. Region Assisted Sketch Colorization

  25. FlatGAN: A Holistic Approach for Robust Flat-Coloring in High-Definition with Understanding Line Discontinuity

  26. Consistency Trajectory Models (CTM): Learning Probability Flow ODE Trajectory of Diffusion

  27. The Colorization Based on Self-Attention Mechanism and GAN

  28. Generating tabular datasets under differential privacy

  29. Semi-supervised reference-based sketch extraction using a contrastive learning framework

  30. Semi-Implicit Denoising Diffusion Models (SIDDMs)

  31. StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

  32. High-Fidelity Audio Compression with Improved RVQGAN

  33. Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis

  34. Multi-Label Classification in Anime Illustrations Based on Hierarchical Attribute Relationships

  35. TANGO: Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model

  36. Thangka Sketch Colorization Based on Multi-Level Adaptive-Instance-Normalized Color Fusion and Skip Connection Attention

  37. Two-Step Training: Adjustable Sketch Colorization via Reference Image and Text Tag

  38. Abstraction-Perception Preserving Cartoon Face Synthesis

  39. Approaching an unknown communication system by latent space exploration and causal inference

  40. GigaGAN: Scaling up GANs for Text-to-Image Synthesis

  41. Overview of Cartoon Face Generation

  42. Enhancing Image Representation in Conditional Image Synthesis

  43. StencilTorch: An Iterative and User-Guided Framework for Anime Lineart Colorization

  44. PMSGAN: Parallel Multistage GANs for Face Image Translation

  45. FAEC-GAN: An unsupervised face-to-anime translation based on edge enhancement and coordinate attention

  46. A survey on text generation using generative adversarial networks

  47. Appearance-preserved Portrait-to-anime Translation via Proxy-guided Domain Adaptation

  48. Seeing a Rose in 5,000 Ways

  49. Reference Based Sketch Extraction via Attention Mechanism

  50. Dr.3D: Adapting 3D GANs to Artistic Drawings

  51. Null-text Inversion for Editing Real Images using Guided Diffusion Models

  52. An analysis: different methods about line art colorization

  53. Guiding Users to Where to Give Color Hints for Efficient Interactive Sketch Colorization via Unsupervised Region Prioritization

  54. High Fidelity Neural Audio Compression

  55. T2CI-GAN: Text to Compressed Image generation using Generative Adversarial Network

  56. GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images

  57. Musika! Fast Infinite Waveform Music Generation

  58. Using Generative Adversarial Networks for Conditional Creation of Anime Posters

  59. AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos

  60. Learning to Generate Artistic Character Line Drawing

  61. Cascaded Video Generation for Videos In-the-Wild

  62. StyleTTS: A Style-Based Generative Model for Natural and Diverse Text-to-Speech Synthesis

  63. Why GANs are overkill for NLP

  64. VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance

  65. Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning

  66. TATS: Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer

  67. MaxViT: Multi-Axis Vision Transformer

  68. Vector-quantized Image Modeling with Improved VQGAN

  69. Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Autoencoders

  70. Do GANs learn the distribution? Some Theory and Empirics

  71. Using Constant Learning Rate of Two Time-Scale Update Rule for Training Generative Adversarial Networks

  72. Microdosing: Knowledge Distillation for GAN based Compression

  73. An unsupervised font style transfer model based on generative adversarial networks

  74. Multimodal Conditional Image Synthesis with Product-of-Experts GANs

  75. TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

  76. Compositional Transformers for Scene Generation

  77. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs

  78. EditGAN: High-Precision Semantic Image Editing

  79. Projected GANs Converge Faster

  80. STransGAN: An Empirical Study on Transformer in GANs

  81. MSMT-GAN: Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis

  82. Unpaired font family synthesis using conditional generative adversarial networks

  83. Fake It Till You Make It: Face analysis in the wild using synthetic data alone

  84. MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators

  85. ViTGAN: Training GANs with Vision Transformers

  86. MLP Singer: Towards Rapid Parallel Korean Singing Voice Synthesis

  87. HiT: Improved Transformer for High-Resolution GANs

  88. GANs N’ Roses: Stable, Controllable, Diverse Image to Image Translation (works for videos too!)

  89. MixerGAN: An MLP-Based Architecture for Unpaired Image-to-Image Translation

  90. EigenGAN: Layer-Wise Eigen-Learning for GANs

  91. Image Super-Resolution via Iterative Refinement

  92. Deep Generative Modeling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models

  93. AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation

  94. Improved Denoising Diffusion Probabilistic Models

  95. The Role of AI Attribution Knowledge in the Evaluation of Artwork

  96. XMC-GAN: Cross-Modal Contrastive Learning for Text-to-Image Generation

  97. Stylized-Colorization for Line Arts

  98. Taming Transformers for High-Resolution Image Synthesis

  99. VQ-GAN: Taming Transformers for High-Resolution Image Synthesis

  100. LDM: Automatic Colorization of Anime Style Illustrations Using a Two-Stage Generator

  101. dStyle-GAN: Generative Adversarial Network based on Writing and Photography Styles for Drug Identification in Darknet Markets

  102. Automatic Colorization of High-resolution Animation Style Line-art based on Frequency Separation and Two-Stage Generator

  103. Image Generators with Conditionally-Independent Pixel Synthesis

  104. RetinaGAN: An Object-aware Approach to Sim-to-Real Transfer

  105. Few-Shot Adaptation of Generative Adversarial Networks

  106. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis

  107. A Good Image Generator Is What You Need for High-Resolution Video Synthesis

  108. Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

  109. Denoising Diffusion Probabilistic Models

  110. Improving GAN Training with Probability Ratio Clipping and Sample Reweighting

  111. Object Segmentation Without Labels with Large-Scale Generative Models

  112. Generative Adversarial Phonology: Modeling unsupervised phonetic and phonological learning with neural networks

  113. CiwGAN and fiwGAN: Encoding information in acoustic data to model lexical learning with Generative Adversarial Networks

  114. Learning to Simulate Dynamic Environments with GameGAN

  115. Reference-Based Sketch Image Colorization using Augmented-Self Reference and Dense Semantic Correspondence

  116. Learning to Simulate Dynamic Environments with GameGAN [homepage]

  117. MakeItTalk: Speaker-Aware Talking-Head Animation

  118. Avatar Artist Using GAN [CS230]

  119. PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

  120. Do We Need Zero Training Loss After Achieving Zero Training Error?

  121. E621 Face Dataset

  122. Smooth markets: A basic mechanism for organizing gradient-based learners

  123. microbatchGAN: Stimulating Diversity with Multi-Adversarial Discrimination

  124. StarGAN Based Facial Expression Transfer for Anime Characters

  125. Deep-Eyes: Fully Automatic Anime Character Colorization with Painting of Details on Empty Pupils

  126. Explorable Super Resolution

  127. PaintsTorch: a User-Guided Anime Line Art Colorization Tool With Double Generator Conditional Adversarial Network

  128. Generating Furry Face Art from Sketches using a GAN

  129. Interactive Anime Sketch Colorization with Style Consistency via a Deep Residual Neural Network

  130. Small-GAN: Speeding Up GAN Training Using Core-sets

  131. Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram

  132. Tag2Pix: Line Art Colorization Using Text Tag With SECat and Changing Loss

  133. Anime Sketch Coloring with Swish-gated Residual U-net and Spectrally Normalized GAN (SSN-GAN)

  134. The Generative Adversarial Brain

  135. Training language GANs from Scratch

  136. Adversarial Examples Are Not Bugs, They Are Features

  137. Few-Shot Unsupervised Image-to-Image Translation

  138. COCO-GAN: Generation by Parts via Conditional Coordinating

  139. Compressing GANs using Knowledge Distillation

  140. How AI Training Scales

  141. InGAN: Capturing and Remapping the "DNA" of a Natural Image

  142. GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint

  143. Language GANs Falling Short

  144. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

  145. Twin-GAN: Unpaired Cross-Domain Image Translation with Weight-Sharing GANs

  146. IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

  147. Sem-GAN: Semantically-Consistent Image-to-Image Translation

  148. Cartoon Set

  149. The relativistic discriminator: a key element missing from standard GAN

  150. An empirical study on evaluation metrics of generative adversarial networks

  151. Bidirectional Learning for Robust Neural Networks

  152. GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training

  153. Toward Diverse Text Generation with Inverse Reinforcement Learning

  154. Synthesizing Programs for Images using Reinforced Adversarial Learning

  155. A Variational Inequality Perspective on Generative Adversarial Networks

  156. ChatPainter: Improving Text to Image Generation using Dialogue

  157. Spectral Normalization for Generative Adversarial Networks

  158. Unsupervised Cipher Cracking Using Discrete GANs

  159. Which Training Methods for GANs do actually Converge?

  160. Two-stage Sketch Colorization

  161. RenderGAN: Generating Realistic Labeled Data

  162. CycleGAN, a Master of Steganography

  163. Multi-Content GAN for Few-Shot Font Style Transfer

  164. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

  165. Are GANs Created Equal? A Large-Scale Study

  166. AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

  167. StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

  168. Style Transfer in Text: Exploration and Evaluation

  169. XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings

  170. Mixed Precision Training

  171. GraspGAN: Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

  172. OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning

  173. Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks

  174. PassGAN: A Deep Learning Approach for Password Guessing

  175. Towards the Automatic Anime Characters Creation with Generative Adversarial Networks

  176. Learning Universal Adversarial Perturbations with Generative Models

  177. Semi-Supervised Haptic Material Recognition for Robots using Generative Adversarial Networks

  178. Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration

  179. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms

  180. Language Generation with Recurrent Generative Adversarial Networks without Pre-training

  181. Adversarial Ranking for Language Generation

  182. Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models

  183. Stabilizing Training of Generative Adversarial Networks through Regularization

  184. SD-GAN: Semantically Decomposing the Latent Spaces of Generative Adversarial Networks

  185. On Convergence and Stability of GANs

  186. Accelerating Science with Generative Adversarial Networks: An Application to 3D Particle Showers in Multi-Layer Calorimeters

  187. Outline Colorization through Tandem Adversarial Networks

  188. Adversarial Neural Machine Translation

  189. Improved Training of Wasserstein GANs

  190. CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

  191. Mastering Sketching: Adversarial Augmentation for Structured Prediction

  192. I2T2I: Learning Text to Image Synthesis with Textual Data Augmentation

  193. Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets

  194. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks

  195. ArtGAN: Artwork Synthesis with Conditional Categorical GANs

  196. Wasserstein GAN

  197. NIPS 2016 Tutorial: Generative Adversarial Networks

  198. Learning from Simulated and Unsupervised Images through Adversarial Training

  199. Generative Adversarial Parallelization

  200. Stacked Generative Adversarial Networks

  201. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space

  202. Pix2Pix: Image-to-Image Translation with Conditional Adversarial Networks

  203. A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models

  204. Connecting Generative Adversarial Networks and Actor-Critic Methods

  205. Neural Photo Editing with Introspective Adversarial Networks

  206. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

  207. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

  208. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

  209. Generative Adversarial Imitation Learning

  210. Improved Techniques for Training GANs

  211. Minibatch Discrimination

  212. Adversarial Feature Learning

  213. Generating images with recurrent adversarial networks

  214. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

  215. Generative Adversarial Networks

  216. Meta-Font, Metamathematics, and Metaphysics: Comments on Donald Knuth’s Article ‘The Concept of a Meta-Font’

  217. Introducing AuraSR—An Open Reproduction of the GigaGAN Upscaler

  219. Generating Large Images from Latent Vectors

  220. Learning to Write Programs That Generate Images

  222. Deconvolution and Checkerboard Artifacts

  223. TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up

  224. Akanazawa/vgan: Code for Image Generation of Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow

  225. Akanimax/Variational_Discriminator_Bottleneck: Implementation (with Some Experimentation) of the Paper Titled "Variational Discriminator Bottleneck"

  226. MSG-GAN: Multi-Scale Gradients GAN (Architecture Inspired from ProGAN but Doesn’t Use Layer-Wise Growing)

  227. GAN-QP: A Novel GAN Framework without Gradient Vanishing and Lipschitz Constraint

  228. IntroVAE: A PyTorch Implementation of Paper ‘IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis’

  229. Twin-GAN: Unpaired Cross-Domain Image Translation With Weight-Sharing GANs

  230. Junyanz/CycleGAN: Software That Can Generate Photos from Paintings, Turn Horses into Zebras, Perform Style Transfer, and More.

  231. Kevinlyu/DCGAN_Pytorch: DCGAN With Vanilla GAN and Least Square GAN Objective

  232. Martinarjovsky/WassersteinGAN

  233. Nolan-Dev/GANInterface: Tool to Interface With a StyleGAN Model

  234. Learning to Simulate Dynamic Environments With GameGAN (CVPR 2020)

  235. A Good Image Generator Is What You Need for High-Resolution Video Synthesis

  236. Yasinyazici/EMA_GAN

  237. Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks

  238. Tour of the Sacred Library

  240. Image Generation

  241. Case Study: Interpreting, Manipulating, and Controlling CLIP With Sparse Autoencoders

  242. Steganography and the CycleGAN—Alignment Failure Case Study

  243. Welcome to Simulation City, the Virtual World Where Waymo Tests Its Autonomous Vehicles

  244. The Rise of Anime Generating AI

  245. [The Invention of GANs]

  246. Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow [Homepage]

  247. design#future-tag-features

  248. 2023-begus-figure2-causaldisentanglementwithextremevaluesbysamplingextremeganlatentstointerpret.png

  249. 2023-xu-figure1-imagesamplesfromufogendiffusionganmodel.png

  250. 2023-xu-figure3-ufogenganfinetuningofdiffusionmodelschematictrainingillustration.png

  251. 2022-zhang-figure1-generatedexamplesofroses.png

  252. 2021-gwern-danbooru-sidebar-tagsbycategory.png

  253. 2020-06-11-gwern-danbooru2019-palms-upscaledrealhandsamples.jpg

  254. 2020-05-31-gwern-danbooru2019-palms-realhandsamples.jpg

  255. 2020-05-30-gwern-danbooru2019-figures-randomsamples-40.jpg

  256. 2020-05-15-gwern-hands-annotation-2hardexamples.png

  257. 2020-anokhin-figure2-schematicarchitectureofconditionallyindependentpixelsynthesisgangenerativemodel.png

  258. 2020-arfa-e621facedataset-cleaned-9x9previewgrid.jpg

  259. 2020-esser-vqgan-architectures.png

  260. 2019-10-17-gwern-introvae-512px-3epoches-samples.jpg

  261. 2019-09-13-gwern-sagantensorflow-asuka-epoch29minibatch3000.jpg

  262. 2019-03-23-gwern-danbooru2018-sfw-512px-trainingsamples.jpg

  263. 2019-03-18-makegirlsmoe-faces-16randomsamples.jpg

  264. 2019-03-11-venyes-reddit-twdnethanosmeme.png

  265. 2019-02-22-dinosarden-twitter-twdnecollage.jpg

  266. 2018-12-25-gwern-vgan-animefaces.jpg

  267. 2018-12-15-gwern-msggan-asukafaces-gen92_60.jpg

  268. 2018-11-21-gwern-ganqp-asukafaces-10400.jpg

  269. 2018-08-18-gwern-sagantensorflow-wholeasuka-epoch26minibatch4500.png

  270. 2018-08-02-gwern-glow-asukafaces-epoch5sample7.jpg

  271. 2018-07-18-gwern-128px-sagantensorflow-wholeasuka-trainingmontage.mp4

  272. 2018-01-04-gwern-wgan-asukafaces-2100.jpg

  273. 2018-mccandlish-openai-howaitrainingscales-gradientnoisescale-paretofrontier.svg

  274. 2017-xu-figure6-attnganopendomainexamplesonmscoco.png

  275. gwern-danbooru2019-512px-samples.jpg

  276. gwern-danbooru2017-512px-samples.jpg

  277. gwern-danbooru2017-512px-face-samples.jpg

  278. https://aclanthology.org/D18-1428/

  279. https://github.com/CompVis

  280. https://haydn.fgl.dev/posts/the-launch-of-waifuxl/

  282. https://paperswithcode.com/sota/text-to-image-generation-on-coco

  283. https://research.google/blog/mobilediffusion-rapid-text-to-image-generation-on-device/

  284. https://research.google/blog/toward-generalized-sim-to-real-transfer-for-robot-learning/

  285. https://towardsdatascience.com/african-masks-gans-tpu-9a6b0cf3105c

  287. https://www.maskaravivek.com/post/gan-synthetic-data-generation/

  289. https://x.com/danielrussruss/status/1482567887395065856

  290. https://x.com/jd_pressman/status/1468007754144960514

  291. https://x.com/search?f=tweets&vertical=default&q=BigGAN&src=typd

  293. MaskBit: Embedding-free Image Generation via Bit Tokens

  294. https://arxiv.org/abs/2409.16211#bytedance

  295. Adversarial Diffusion Distillation

  296. Robin Rombach

  297. https://arxiv.org/abs/2311.17042#stability

  298. UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs

  299. https://arxiv.org/abs/2311.09257#google

  300. Consistency Trajectory Models (CTM): Learning Probability Flow ODE Trajectory of Diffusion

  301. Stefano Ermon

  302. https://arxiv.org/abs/2310.02279#sony

  303. StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

  304. https://arxiv.org/abs/2306.07691

  305. TANGO: Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model

  306. https://arxiv.org/abs/2304.13731

  307. Approaching an unknown communication system by latent space exploration and causal inference

  308. https://arxiv.org/abs/2303.10931

  309. GigaGAN: Scaling up GANs for Text-to-Image Synthesis

  310. https://arxiv.org/abs/2303.05511#adobe

  311. A survey on text generation using generative adversarial networks

  312. https://arxiv.org/abs/2212.11119

  313. High Fidelity Neural Audio Compression

  314. https://arxiv.org/abs/2210.13438#facebook

  315. Using Generative Adversarial Networks for Conditional Creation of Anime Posters

  316. /doc/ai/anime/2022-sankalpa.pdf

  317. TATS: Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer

  318. https://arxiv.org/abs/2204.03638#facebook

  319. Vector-quantized Image Modeling with Improved VQGAN

  320. https://arxiv.org/abs/2110.04627#google

  321. Projected GANs Converge Faster

  322. https://arxiv.org/abs/2111.01007

  323. ViTGAN: Training GANs with Vision Transformers

  324. https://arxiv.org/abs/2107.04589

  325. HiT: Improved Transformer for High-Resolution GANs

  326. https://arxiv.org/abs/2106.07631#google

  327. Image Super-Resolution via Iterative Refinement

  328. Jonathan Ho

  329. William Chan

  330. Tim Salimans

  331. https://arxiv.org/abs/2104.07636#google

  332. Improved Denoising Diffusion Probabilistic Models

  333. https://arxiv.org/abs/2102.09672#openai

  334. XMC-GAN: Cross-Modal Contrastive Learning for Text-to-Image Generation

  335. https://arxiv.org/abs/2101.04702#google

  336. Image Generators with Conditionally-Independent Pixel Synthesis

  337. https://arxiv.org/abs/2011.13775

  338. E621 Face Dataset

  339. https://x.com/arfafax

  340. https://github.com/arfafax/E621-Face-Dataset

  341. Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram

  342. https://arxiv.org/abs/1910.11480#naver

  343. How AI Training Scales

  344. Sam McCandlish

  345. Jared Kaplan

  346. https://openai.com/research/how-ai-training-scales

  347. Language GANs Falling Short

  348. https://arxiv.org/abs/1811.02549

  349. CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

  350. https://arxiv.org/abs/1703.10593#bair

  351. Minibatch Discrimination

  352. Tim Salimans

  353. Alec Radford

  354. https://arxiv.org/pdf/1606.03498#page=3&org=openai

  355. Meta-Font, Metamathematics, and Metaphysics: Comments on Donald Knuth’s Article ‘The Concept of a Meta-Font’

  356. /doc/design/typography/1982-hofstadter.pdf