"StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets", 2022-02-01:
[video; code] Computer graphics has experienced a recent surge of data-centric approaches for photorealistic and controllable content creation. StyleGAN in particular sets new standards for generative modeling regarding image quality and controllability. However, StyleGAN's performance severely degrades on large unstructured datasets such as ImageNet. StyleGAN was designed for controllability; hence, prior works suspect its restrictive design to be unsuitable for diverse datasets.
In contrast, we find the main limiting factor to be the current training strategy. Following the recently introduced Projected GAN paradigm, we leverage powerful neural network priors and a progressive growing strategy to successfully train the latest StyleGAN3 generator on ImageNet.
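The progressive growing strategy mentioned above trains the generator at a low resolution first, then repeatedly doubles the output resolution, adding new synthesis layers at each stage. A minimal sketch of such a resolution schedule, with illustrative names not taken from the StyleGAN-XL codebase:

```python
# Hypothetical sketch of a progressive-growing schedule: training
# starts small and doubles the resolution in stages, growing the
# generator each time. Function and variable names are assumptions
# for illustration, not the paper's actual API.

def growing_schedule(start_res=16, final_res=1024):
    """Return the list of training resolutions, doubling each stage."""
    res, stages = start_res, []
    while res <= final_res:
        stages.append(res)
        res *= 2
    return stages

for res in growing_schedule():
    # At each stage one would: (1) append new synthesis layers for
    # this resolution, (2) keep the already-trained lower-resolution
    # layers, and (3) train until quality plateaus before growing.
    print(f"train at {res}x{res}")
```

Under this schedule the model reaches 1024×1024 in seven stages, which matches the final resolution the paper reports for ImageNet synthesis.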
Our final model, StyleGAN-XL, sets a new state-of-the-art on large-scale image synthesis and is the first to generate ImageNet images at a resolution of 1024² at such a dataset scale.
We demonstrate that this model can invert and edit images beyond the narrow domain of portraits or specific object classes.
…Our contributions enable us to train a much larger model than previously possible while requiring less computation than prior art. Our model is 3× larger in terms of depth and parameter count than a standard StyleGAN3. However, to match the prior state-of-the-art performance of ADM at a resolution of 512² pixels, training the models on a single NVIDIA Tesla V100 takes 400 GPU-days compared to the previously required 1,914 V100-days.
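The compute figures above imply a sizeable efficiency gain; a quick back-of-the-envelope check of the reported numbers:

```python
# Reported training costs at 512^2: 1,914 V100 GPU-days for ADM vs.
# 400 V100 GPU-days for StyleGAN-XL to match its performance.
adm_days = 1914
xl_days = 400

speedup = adm_days / xl_days  # roughly 4.8x less compute
print(f"~{speedup:.1f}x reduction in GPU-days")
```

So matching ADM's 512² results costs StyleGAN-XL roughly a fifth of the training compute, despite the 3× larger model.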