‘data-augmented GANs’ tag
- See Also
- Links
- “Idempotent Generative Network”, Shocher et al 2023
- “A Cookbook of Self-Supervised Learning”, Balestriero et al 2023
- “Text-Only Training for Image Captioning Using Noise-Injected CLIP”, Nukrai et al 2022
- “BigVGAN: A Universal Neural Vocoder With Large-Scale Training”, Lee et al 2022
- “Diffusion-GAN: Training GANs With Diffusion”, Wang et al 2022
- “InvGAN: Invertible GANs”, Ghosh et al 2021
- “FuseDream: Training-Free Text-To-Image Generation With Improved CLIP+GAN Space Optimization”, Liu et al 2021
- “CDM: Cascaded Diffusion Models for High Fidelity Image Generation”, Ho et al 2021
- “Training GANs With Stronger Augmentations via Contrastive Discriminator (ContraD)”, Jeong & Shin 2021
- “TransGAN: Two Transformers Can Make One Strong GAN”, Jiang et al 2021
- “Contrastive Representation Learning: A Framework and Review”, Le-Khac et al 2020
- “Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis”, Anonymous 2020
- “Differentiable Augmentation for Data-Efficient GAN Training”, Zhao et al 2020
- “StyleGAN2-ADA: Training Generative Adversarial Networks With Limited Data”, Karras et al 2020
- “On Data Augmentation for GAN Training”, Tran et al 2020
- “Image Augmentations for GAN Training”, Zhao et al 2020
- “Anime Crop Datasets: Faces, Figures, & Hands”, Gwern et al 2020
- “Practical Aspects of StyleGAN2 Training”, l4rz 2020
- “A U-Net Based Discriminator for Generative Adversarial Networks”, Schönfeld et al 2020
- “Improved Consistency Regularization for GANs”, Zhao et al 2020
- “Improved Consistency Regularization for GANs § 2.1 Balanced Consistency Regularization (bCR)”, Zhao et al 2020 (page 2; Google)
- Wikipedia
- Miscellaneous
- Bibliography
See Also
Links
“Idempotent Generative Network”, Shocher et al 2023
“A Cookbook of Self-Supervised Learning”, Balestriero et al 2023
“Text-Only Training for Image Captioning Using Noise-Injected CLIP”, Nukrai et al 2022
“BigVGAN: A Universal Neural Vocoder With Large-Scale Training”, Lee et al 2022
“Diffusion-GAN: Training GANs With Diffusion”, Wang et al 2022
“InvGAN: Invertible GANs”, Ghosh et al 2021
“FuseDream: Training-Free Text-To-Image Generation With Improved CLIP+GAN Space Optimization”, Liu et al 2021
“CDM: Cascaded Diffusion Models for High Fidelity Image Generation”, Ho et al 2021
“Training GANs With Stronger Augmentations via Contrastive Discriminator (ContraD)”, Jeong & Shin 2021
“TransGAN: Two Transformers Can Make One Strong GAN”, Jiang et al 2021
“Contrastive Representation Learning: A Framework and Review”, Le-Khac et al 2020
“Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis”, Anonymous 2020
“Differentiable Augmentation for Data-Efficient GAN Training”, Zhao et al 2020
“StyleGAN2-ADA: Training Generative Adversarial Networks With Limited Data”, Karras et al 2020
“On Data Augmentation for GAN Training”, Tran et al 2020
“Image Augmentations for GAN Training”, Zhao et al 2020
“Anime Crop Datasets: Faces, Figures, & Hands”, Gwern et al 2020
“Practical Aspects of StyleGAN2 Training”, l4rz 2020
“A U-Net Based Discriminator for Generative Adversarial Networks”, Schönfeld et al 2020
“Improved Consistency Regularization for GANs”, Zhao et al 2020
“Improved Consistency Regularization for GANs § 2.1 Balanced Consistency Regularization (bCR)”, Zhao et al 2020 (page 2; Google)
Wikipedia
Miscellaneous
Bibliography
- https://arxiv.org/abs/2206.04658#nvidia : “BigVGAN: A Universal Neural Vocoder With Large-Scale Training”, Lee et al 2022
- https://arxiv.org/abs/2112.01573 : “FuseDream: Training-Free Text-To-Image Generation With Improved CLIP+GAN Space Optimization”, Liu et al 2021
- https://cascaded-diffusion.github.io/ : “CDM: Cascaded Diffusion Models for High Fidelity Image Generation”, Ho et al 2021
- https://arxiv.org/abs/2102.07074 : “TransGAN: Two Transformers Can Make One Strong GAN”, Jiang et al 2021
- https://arxiv.org/abs/2006.10738 : “Differentiable Augmentation for Data-Efficient GAN Training”, Zhao et al 2020
- https://arxiv.org/abs/2002.12655 : “A U-Net Based Discriminator for Generative Adversarial Networks”, Schönfeld et al 2020
- https://arxiv.org/abs/2002.04724 : “Improved Consistency Regularization for GANs”, Zhao et al 2020