“CartoonizeDiff: Diffusion-Based Photo Cartoonization Scheme”, 2024-02-18:
Photo cartoonization seeks to create cartoon-style images from photos of real-life scenes. Diverse deep learning-based methods have been proposed to automate photo cartoonization, but they tend to oversimplify high-frequency patterns, yielding images that read as flat abstractions rather than a genuine animation style.
To alleviate this problem, this paper proposes CartoonizeDiff, a new photo cartoonization method based on a diffusion model and ControlNet. In the proposed method, a Color Canny ControlNet and a Reflect ControlNet are attached to a pretrained latent diffusion model to preserve the color, structure, and fine details of the input photos, enabling higher-fidelity cartoonization.
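The abstract does not specify how the "Color Canny" conditioning signal is built, but a plausible reading is an edge map that retains the photo's colors rather than the usual binary Canny output, so a ControlNet conditioned on it can guide both structure and palette. Below is a minimal, hypothetical sketch of such a color-preserving edge condition map; Sobel gradients stand in for the full Canny pipeline, and the function name and threshold are assumptions, not the paper's implementation.

```python
import numpy as np

def color_edge_condition(img, threshold=0.2):
    """Hypothetical color-preserving edge condition map.

    A plain Canny map is binary and discards color; here the photo's
    original colors are kept at edge pixels so a conditioning network
    could constrain structure and color together. Sobel gradient
    magnitude is used as a simplified stand-in for Canny.

    img: float array in [0, 1], shape (H, W, 3).
    Returns an array of the same shape: original colors at edge
    pixels, zeros elsewhere.
    """
    gray = img.mean(axis=2)
    # Sobel kernels for horizontal / vertical intensity gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    h, w = gray.shape
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-8  # normalize to [0, 1]
    edges = mag > threshold
    # Keep the photo's color only where an edge was detected.
    return img * edges[..., None]

# Example: a synthetic image with one vertical color boundary.
img = np.zeros((8, 8, 3))
img[:, 4:] = [1.0, 0.5, 0.2]
cond = color_edge_condition(img)
```

In this toy example only the pixels along the vertical boundary survive, and they keep their original RGB values, so the condition image encodes where the edges are and what colors they border.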
Through extensive experiments on animation-background and real-world landscape datasets, we demonstrate that the proposed method outperforms existing methods both quantitatively and qualitatively.