“Diffusion Transformers (DiTs): Scalable Diffusion Models With Transformers”, 2022-12-19:
We explore a new class of diffusion models based on the transformer architecture.
We train latent diffusion models of images, replacing the commonly used U-Net backbone with a transformer that operates on latent patches.
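Concretely, the "patchify" step can be sketched as below (a minimal sketch, not the authors' code; it assumes a 32×32×4 VAE latent for a 256×256 image and borrows DiT-XL/2's patch size 2 and hidden size 1152, with all shapes illustrative):

```python
import torch
import torch.nn as nn

class Patchify(nn.Module):
    """Sketch: turn a spatial latent into a sequence of transformer tokens.

    Assumes a latent of shape (B, C, H, W), e.g. (B, 4, 32, 32) from a VAE
    encoding of a 256x256 image; a patch size p yields (H/p) * (W/p) tokens.
    """
    def __init__(self, in_channels=4, patch_size=2, hidden_size=1152):
        super().__init__()
        # A strided conv is equivalent to cutting the latent into
        # non-overlapping p x p patches and applying a shared linear
        # projection to each flattened patch.
        self.proj = nn.Conv2d(in_channels, hidden_size,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                     # (B, hidden, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, num_tokens, hidden)

latent = torch.randn(1, 4, 32, 32)           # hypothetical VAE latent
tokens = Patchify(patch_size=2)(latent)
print(tokens.shape)                          # torch.Size([1, 256, 1152])
```

Halving the patch size quadruples the number of tokens, which is the third scaling knob alongside transformer depth and width.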
We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward-pass complexity as measured by GFLOPs.
We find that DiTs with higher GFLOPs (through increased transformer depth/width or an increased number of input tokens) consistently have lower FID.
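A back-of-the-envelope FLOP count makes these knobs concrete (a rough sketch, not the paper's counting code: it tallies only the attention and MLP matmuls of a plain transformer, counts one FLOP per multiply-accumulate, and ignores embedding, conditioning, and final layers; the 28-layer/1152-wide config is DiT-XL's):

```python
def dit_forward_gflops(depth, width, tokens, mlp_ratio=4):
    """Rough per-image forward-pass GFLOPs for a plain transformer.

    Per block: QKV + output projections cost 4*T*d^2 MACs, the attention
    products QK^T and AV cost 2*T^2*d, and the two MLP matmuls (with a
    4x expansion) cost 2*mlp_ratio*T*d^2.
    """
    attn = 4 * tokens * width**2 + 2 * tokens**2 * width
    mlp = 2 * mlp_ratio * tokens * width**2
    return depth * (attn + mlp) / 1e9

# Token count grows as (latent_size / patch_size)**2 on a 32x32 latent
# (256x256 images), so smaller patches mean more tokens and more compute.
for patch in (8, 4, 2):
    tokens = (32 // patch) ** 2
    g = dit_forward_gflops(depth=28, width=1152, tokens=tokens)
    print(f"DiT-XL/{patch}: {tokens:4d} tokens, ~{g:5.1f} GFLOPs")
```

The patch-2 line comes out near the roughly 119 GFLOPs the paper reports for DiT-XL/2 (the reported totals are slightly higher since they include the layers this sketch omits). Cost is linear in depth, roughly quadratic in width, and between linear and quadratic in token count because of the T²·d attention term.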
In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512×512 and 256×256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.