Cross-model interpolations are one of those neat hidden features that arise from transfer learning. Here I'm interpolating between 5 StyleGAN2 models: furry, FFHQ, anime, ponies, and @KitsuneKey's fox model. All were fine-tuned from the same base model, which is what makes blending possible.

Aug 19, 2020 · 2:01 PM UTC

The intuition behind this should be clear to anyone who's ever looked at the fakes####.png files the base StyleGAN2 repo spits out during training, or scrubbed through the images on TensorBoard. You're seeing different snapshots of the model through time as it trains.
So, obviously, taking a weighted average of the two endpoint models' weights should let you smoothly transition between the different domains. This wouldn't work on two models trained from random starting points, though, because there isn't a clear path between the two endpoints.
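The blend itself is just linear interpolation of the parameters, key by key. A minimal sketch, with toy numpy dicts standing in for real StyleGAN2 checkpoints (the name `blend_weights` and the toy layer names are mine, not from the thread):

```python
import numpy as np

def blend_weights(w_a, w_b, alpha):
    """Linearly interpolate two checkpoints' parameter dicts.
    alpha=0 returns model A exactly, alpha=1 returns model B."""
    assert w_a.keys() == w_b.keys(), "blending needs identical architectures"
    return {k: (1 - alpha) * w_a[k] + alpha * w_b[k] for k in w_a}

# Toy "checkpoints": same keys because both descend from one base model.
ffhq  = {"conv0": np.zeros((2, 2)), "bias0": np.zeros(2)}
furry = {"conv0": np.ones((2, 2)),  "bias0": np.ones(2)}

mid = blend_weights(ffhq, furry, 0.5)
print(mid["conv0"][0, 0])  # 0.5
```

This only makes sense because the two checkpoints share a common ancestor, so corresponding weights sit in roughly compatible positions; averaging two independently trained models this way just produces noise.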
Here's a version with fixed latents all the way through. Only the model weights are being changed.
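For the multi-model video, one way to sketch the schedule: hold a single latent fixed and walk the weight blend through each consecutive pair of checkpoints. The frame count, helper name, and stand-in checkpoints below are illustrative, not the thread author's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(512)  # one fixed latent, reused for every frame

def lerp(w_a, w_b, t):
    # Per-parameter linear interpolation between two checkpoints.
    return {k: (1 - t) * w_a[k] + t * w_b[k] for k in w_a}

# Stand-ins for the 5 models (furry, FFHQ, anime, ponies, fox).
checkpoints = [{"w": np.full(4, float(i))} for i in range(5)]
frames_per_leg = 10

blended_path = []
for a, b in zip(checkpoints, checkpoints[1:]):
    for t in np.linspace(0.0, 1.0, frames_per_leg, endpoint=False):
        blended_path.append(lerp(a, b, t))

# Each entry would be loaded into the generator and run on the same z
# to render one frame, so only the weights change across the video.
```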
Replying to @arfafax @kitsunekey
So you trained furries out of FFHQ scaled down to 512px? And now you can network-blend them? That's awesome 👏
Replying to @arfafax @kitsunekey
@DeborahSimpier I thought you’d get a kick out of this horrifying shit
Replying to @arfafax @kitsunekey
Tested the idea on my own GAN, it works. Now I can interpolate between checkpoints. Which is where the best results usually are :)
Replying to @arfafax @kitsunekey
Jesus christ this is amazing and horrifying xD