[Paper; blog] We show that cascaded diffusion models are capable of generating high fidelity images on the class-conditional ImageNet generation challenge, without any assistance from auxiliary image classifiers to boost sample quality.
A cascaded diffusion model comprises a pipeline of multiple diffusion models that generate images of increasing resolution, beginning with a standard diffusion model at the lowest resolution, followed by one or more super-resolution diffusion models that successively upsample the image and add higher resolution details.
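The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `base_model`, `sr_models`, and the nearest-neighbor `upsample` helper are hypothetical stand-ins for the trained diffusion samplers and the resizing used in practice.

```python
import numpy as np

def upsample(img, factor):
    # Nearest-neighbor upsampling as a simple stand-in for the
    # bilinear/bicubic resizing a real pipeline would use.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def cascade_sample(base_model, sr_models, class_label):
    # Stage 1: the base diffusion model generates a low-resolution image.
    img = base_model(class_label)  # e.g. (32, 32, 3)
    # Stages 2..N: each super-resolution diffusion model conditions on
    # the upsampled output of the previous stage and adds finer detail.
    for sr_model, factor in sr_models:
        img = sr_model(upsample(img, factor), class_label)
    return img
```

Each stage only ever sees the previous stage's output as conditioning, which is what makes the pipeline modular: the stages can be trained independently at their own resolutions.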
We find that the sample quality of a cascading pipeline relies crucially on conditioning augmentation, our proposed technique of applying data augmentation to the low-resolution conditioning inputs of the super-resolution models.
Our experiments show that conditioning augmentation prevents compounding error during sampling in a cascaded model, helping us to train cascading pipelines achieving FID scores of 1.48 at 64×64, 3.52 at 128×128 and 4.88 at 256×256 resolutions, outperforming BigGAN-deep, and classification accuracy scores of 63.02% (top-1) and 84.06% (top-5) at 256×256, outperforming VQ-VAE-2.
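To make the idea concrete, here is a minimal sketch of the simplest form of conditioning augmentation: corrupting the low-resolution conditioning input with Gaussian noise of random strength during super-resolution training, so the model learns to tolerate imperfect inputs from the earlier stage. The function name, the `max_std` parameter, and the uniform noise schedule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def condition_augment(low_res, rng, max_std=0.5):
    # Hypothetical sketch: during training, add Gaussian noise of a
    # randomly drawn strength to the low-resolution conditioning input.
    # At sampling time the same kind of corruption masks the mismatch
    # between real downsampled images (seen in training) and generated
    # lower-stage samples, preventing errors from compounding.
    std = rng.uniform(0.0, max_std)
    noisy = low_res + std * rng.standard_normal(low_res.shape)
    return noisy, std  # std can also be passed to the model as a signal
```

The key property is that the super-resolution model never learns to trust its conditioning input exactly, which is what breaks the train/sample distribution mismatch across stages.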
Figure: samples from denoising diffusion probabilistic models trained on CelebA-HQ, LSUN Bedrooms, LSUN Church, and LSUN Cat at 256×256 resolution.
Figure: selected generated images from our 256×256 class-conditional ImageNet model.
Cascaded Diffusion Models (CDM) are pipelines of diffusion models that generate images of increasing resolution.
CDMs yield high fidelity samples superior to BigGAN-deep and VQ-VAE-2 in terms of both FID score and classification accuracy score on class-conditional ImageNet generation.
These results are achieved with pure generative models without any classifier.
We introduce conditioning augmentation, a data augmentation technique that we find critical to achieving high sample fidelity.
…Concurrently, Dhariwal and Nichol showed that their diffusion models, named ADM, also outperform GANs on ImageNet generation. ADM achieves this result using classifier guidance, which boosts sample quality by modifying the diffusion sampling procedure to simultaneously maximize the score of an extra image classifier. As measured by FID score, ADM with classifier guidance outperforms our reported results, but our reported results outperform ADM without classifier guidance.
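For intuition, the classifier-guided sampling modification can be sketched as a shift of the diffusion model's predicted noise by the gradient of a classifier's log-probability for the target class. This is a schematic of the general technique, not ADM's exact implementation; the function name and `scale` parameter are illustrative.

```python
import numpy as np

def guided_epsilon(eps, classifier_grad, sigma_t, scale=1.0):
    # Classifier guidance, schematically: subtract a scaled multiple of
    # the classifier's gradient (of log p(class | noisy image)) from the
    # predicted noise, so each denoising step drifts toward images the
    # classifier assigns to the target class.
    return eps - scale * sigma_t * classifier_grad
```

Cascading, by contrast, improves fidelity purely by restructuring the generative model itself, with no classifier in the loop.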
Our work is a demonstration of the effectiveness of pure generative models, namely cascaded diffusion models without the assistance of extra image classifiers. Nonetheless, classifier guidance and cascading are complementary techniques for improving sample quality, and a detailed investigation of how they interact is warranted.