“Variational Autoencoders Without the Variation”, Gregory A. Daly, Jonathan E. Fieldsend, Gavin Tabor, 2022-03-01:

Variational autoencoders (VAEs) are a popular approach to generative modeling. However, exploiting the capabilities of VAEs in practice can be difficult. Recent work on regularized and entropic autoencoders has begun to explore the potential of removing the variational approach and returning to the classic deterministic autoencoder (DAE), combined with additional novel regularization methods, for generative modeling.

In this paper, we empirically explore the capability of DAEs for image generation without such additional methods, and the effects of the implicit regularization and smoothness of large networks.

We find that DAEs can be used successfully for image generation without additional loss terms, and that many of the useful properties of VAEs can arise implicitly from sufficiently large convolutional encoders and decoders when trained on CIFAR-10 and CelebA.
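To make the contrast with a VAE concrete, the sketch below trains a tiny deterministic autoencoder with a pure reconstruction objective: the latent code is computed directly (no sampling, no KL-divergence term). This is a minimal illustration in plain NumPy on synthetic data, not the paper's convolutional architecture; all shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal deterministic autoencoder (DAE): one hidden bottleneck layer,
# trained only with mean-squared reconstruction loss. Unlike a VAE,
# there is no stochastic sampling step and no KL regularizer.
rng = np.random.default_rng(0)
n, d, h = 256, 16, 4                 # samples, input dim, bottleneck dim (illustrative)
X = rng.normal(size=(n, d))          # synthetic stand-in for image data

W_enc = rng.normal(size=(d, h)) * 0.1
W_dec = rng.normal(size=(h, d)) * 0.1
lr = 0.01

losses = []
for _ in range(500):
    Z = np.tanh(X @ W_enc)           # deterministic latent code
    X_hat = Z @ W_dec                # reconstruction
    err = X_hat - X
    loss = np.mean(err ** 2)         # pure reconstruction objective
    losses.append(loss)

    # Manual backpropagation through the decoder, tanh, and encoder.
    dX_hat = 2 * err / X.size
    dW_dec = Z.T @ dX_hat
    dZ = dX_hat @ W_dec.T
    dpre = dZ * (1 - Z ** 2)         # tanh'(a) = 1 - tanh(a)^2
    dW_enc = X.T @ dpre
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

With a bottleneck narrower than the input (h < d), the reconstruction loss cannot reach zero, but it falls steadily; the paper's point is that with sufficiently large convolutional encoders/decoders, this plain objective already yields latent spaces with many of the properties usually attributed to the variational machinery.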