“Improving Visual Quality of Image Synthesis by A Token-Based Generator With Transformers”, Yanhong Zeng, Huan Yang, Hongyang Chao, Jianbo Wang, Jianlong Fu (2021-11-05):

We present a new perspective on image synthesis by viewing the task as a visual token generation problem. Unlike existing paradigms that directly synthesize a full image from a single input (e.g., a latent code), the new formulation enables flexible local manipulation of different image regions, which makes it possible to learn content-aware and fine-grained style control for image synthesis.
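
To make the reformulation concrete, here is a toy sketch in PyTorch of a generator that maps a sequence of latent tokens to per-region visual tokens and assembles them into an image. All names, shapes, and the trivial token predictor and patch decoder are our own illustrative stand-ins, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Toy stand-in for the token-based view. All names, shapes, and the modules
# below are illustrative assumptions, not the paper's architecture.
B, N_LATENT, N_VISUAL, D = 2, 16, 64, 256      # 64 visual tokens -> an 8x8 grid

to_visual = nn.Linear(D, D)                    # placeholder visual-token predictor
to_rgb = nn.Linear(D, 3 * 4 * 4)               # each token decodes one 4x4 RGB patch

latent_tokens = torch.randn(B, N_LATENT, D)    # input: a sequence of latent tokens
# Predict one visual token per image region (here via trivial pooling + a linear map).
visual_tokens = to_visual(latent_tokens.mean(dim=1, keepdim=True).expand(B, N_VISUAL, D))
# Local manipulation is now natural: editing visual_tokens[:, i] only affects region i.
patches = to_rgb(visual_tokens).view(B, 8, 8, 3, 4, 4)
image = patches.permute(0, 3, 1, 4, 2, 5).reshape(B, 3, 32, 32)  # assemble the image
```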

Specifically, the generator takes as input a sequence of latent tokens and predicts the visual tokens for synthesizing an image. Under this perspective, we propose a token-based generator (i.e., TokenGAN). In particular, TokenGAN takes two semantically different kinds of tokens as input: learned constant content tokens and style tokens drawn from the latent space. Given a sequence of style tokens, TokenGAN controls the image synthesis by assigning styles to the content tokens through an attention mechanism with a Transformer.
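
A minimal sketch of how such attention-based style assignment could look, assuming it resembles standard cross-attention (the paper's exact formulation may differ; variable names and shapes are ours):

```python
import torch
import torch.nn as nn

# Sketch of attention-based style assignment: learned content tokens act as
# queries, style tokens as keys/values. Assumed to resemble cross-attention.
B, N_CONTENT, N_STYLE, D = 2, 64, 16, 256

content_tokens = nn.Parameter(torch.randn(1, N_CONTENT, D))   # learned constant content tokens
style_tokens = torch.randn(B, N_STYLE, D)                     # style tokens from the latent space

attn = nn.MultiheadAttention(embed_dim=D, num_heads=8, batch_first=True)
queries = content_tokens.expand(B, -1, -1)                    # content tokens query the styles
styled_tokens, attn_weights = attn(queries, style_tokens, style_tokens)
# Each content token attends to the style tokens it needs, so styles are
# assigned per region rather than broadcast globally across the image.
```

Stacking several such attention layers and decoding the styled tokens to pixels would, in principle, give a convolution-free generator, consistent with the claim below.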

We conduct extensive experiments and show that the proposed TokenGAN achieves state-of-the-art results on several widely used image synthesis benchmarks, including FFHQ and LSUN CHURCH at multiple resolutions. In particular, the generator is able to synthesize high-fidelity 1,024×1,024 images while dispensing with convolutions entirely.