“Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis”, Anonymous, 2020-09-28:

A computationally efficient GAN for few-shot high-fidelity image synthesis (converges on a single GPU in a few hours of training, on <100 1024px images).

Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimal computing cost. We propose a lightweight GAN structure that gains superior quality at 1,024×1,024px resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX 2080 GPU, and performs consistently even with fewer than 100 training samples. Two technique designs constitute our work: a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature encoder. On 13 datasets covering a wide variety of image domains, we show our model’s robustness and its superior performance compared to the state-of-the-art StyleGAN2.
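The skip-layer channel-wise excitation module can be understood as squeeze-and-excitation gating across resolutions: a low-resolution feature map is pooled to a per-channel summary, projected to the high-resolution channel count, and used to rescale the high-resolution map channel by channel. A minimal NumPy sketch of that gating idea follows; the paper's actual module uses small convolutions and different nonlinearities, so the function name, weight shapes, and activations here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skip_layer_excitation(x_low, x_high, w1, w2):
    """Channel-wise gating between resolutions (illustrative sketch).

    x_low:  low-resolution feature map, shape (C_low, h, w)
    x_high: high-resolution feature map, shape (C_high, H, W)
    w1, w2: stand-in projection weights (the paper uses conv layers)
    """
    # Squeeze: global average pool the low-res map -> (C_low,) vector
    squeezed = x_low.mean(axis=(1, 2))
    # Excite: two projections with ReLU then sigmoid -> gate in (0, 1)
    hidden = np.maximum(w1 @ squeezed, 0.0)
    gate = sigmoid(w2 @ hidden)               # shape (C_high,)
    # Rescale each high-res channel by its gate value
    return x_high * gate[:, None, None]
```

Because the gate lies in (0, 1), the module can only attenuate channels of the high-resolution map, while the gradient path through the gate gives the low-resolution layers a shortcut to the output.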

[Keywords: deep learning, generative model, image synthesis, few-shot learning, generative adversarial network, self-supervised learning, unsupervised learning]