“SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis”, 2023-07-04:
We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a 3× larger U-Net backbone. The increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder.
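The "larger cross-attention context" comes from concatenating the two text encoders' token-level outputs along the channel axis. A minimal numpy sketch of that concatenation follows; the channel widths (768 for CLIP ViT-L, 1280 for OpenCLIP ViT-bigG) match the released model, but the encoder functions here are random stand-ins, not the real networks.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len = 77  # CLIP-style token sequence length

def encode_clip_vit_l(tokens):
    # Stand-in for the penultimate-layer output of CLIP ViT-L (768 channels).
    return rng.standard_normal((len(tokens), 768))

def encode_openclip_bigg(tokens):
    # Stand-in for the penultimate-layer output of OpenCLIP ViT-bigG (1280 channels).
    return rng.standard_normal((len(tokens), 1280))

tokens = list(range(seq_len))

# Concatenate per-token features from both encoders along the channel axis;
# the result is the cross-attention context fed to the U-Net.
context = np.concatenate(
    [encode_clip_vit_l(tokens), encode_openclip_bigg(tokens)], axis=-1
)
print(context.shape)  # (77, 2048)
```

The widened context (2048 channels instead of 768) is one of the main contributors to the parameter increase, since every cross-attention layer's key/value projections must match it.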
We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique.
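One of the conditioning schemes described in the full paper conditions the model on the original image size, crop coordinates, and target resolution ("micro-conditioning"): each scalar is embedded with a sinusoidal (Fourier) embedding, and the embeddings are concatenated and added to the timestep embedding. A hedged sketch, with illustrative dimensions rather than the model's actual ones:

```python
import numpy as np

def fourier_embed(value, dim=256, max_period=10000.0):
    """Sinusoidal embedding of a scalar, in the style of diffusion timestep embeddings."""
    half = dim // 2
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    angles = value * freqs
    return np.concatenate([np.cos(angles), np.sin(angles)])

# Micro-conditioning scalars for one training example (illustrative values):
original_size = (512, 768)   # (height, width) of the raw training image
crop_coords = (0, 64)        # top-left crop offsets applied during preprocessing
target_size = (1024, 1024)   # resolution the sample should emulate

# Embed each scalar and concatenate; in the real model this vector is added
# to the timestep embedding inside the U-Net (sketch, not the actual code).
cond = np.concatenate(
    [fourier_embed(v) for v in (*original_size, *crop_coords, *target_size)]
)
print(cond.shape)  # (1536,) = 6 scalars x 256-dim embedding each
```

At inference time the user can set these scalars directly, e.g. requesting a large `original_size` to steer the model away from the low-resolution artifacts it associates with small training images.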
We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators…Failure cases of SDXL: despite large improvements compared to previous versions of Stable Diffusion, the model sometimes still struggles with very complex prompts involving detailed spatial arrangements and detailed descriptions (e.g. top left example)…Additionally, while our model represents a large advancement over previous iterations of SD, it still encounters difficulties when rendering long, legible text. Occasionally, the generated text may contain random characters or exhibit inconsistencies. [Due to continued use of CLIP with BPE tokenization, unlike DeepFloyd IF, which uses T5 as its language model.] …Overall, there is a slight preference for SDXL over Midjourney in terms of prompt adherence…In 4⁄6 PartiPrompts categories SDXL outperforms Midjourney, and in 7⁄10 challenges there is no statistically significant difference between the two models or SDXL outperforms Midjourney.
In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights at https://github.com/Stability-AI/generative-models.