Combining my cross-model interpolation with @Buntworthy's layer-swapping idea. Here the different resolution layers are interpolated at different rates between furry, FFHQ, and @KitsuneKey's foxes: p0 is the 4x4 and 8x8 layers, p1 is 16x16 through 128x128, and p2 is 256x256 through 512x512.
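A minimal sketch of what this kind of per-resolution-group blending can look like, assuming each checkpoint is a plain dict of layer name → numpy weight array with the resolution embedded in the name. The key patterns, names, and example coefficients below are all illustrative, not the notebook's actual code:

```python
# Per-resolution-group interpolation between three StyleGAN checkpoints.
# Assumes layer names contain an "NxN" token (exact names vary by port).
import re
import numpy as np

# p0 covers 4x4-8x8, p1 covers 16x16-128x128, p2 covers 256x256-512x512.
GROUPS = {
    "p0": {4, 8},
    "p1": {16, 32, 64, 128},
    "p2": {256, 512},
}

def group_of(layer_name):
    """Return the group key for a layer, based on the NxN token in its name."""
    m = re.search(r"(\d+)x\1", layer_name)
    if m is None:
        return None  # e.g. mapping-network layers: handled by the caller
    res = int(m.group(1))
    for key, sizes in GROUPS.items():
        if res in sizes:
            return key
    return None

def blend(checkpoints, weights):
    """Blend checkpoints (dicts of arrays) with per-group coefficients.

    `weights` maps group key -> list of per-model coefficients summing to 1.
    Layers outside any group (like the mapping network) are copied from
    the first checkpoint unchanged.
    """
    out = {}
    for name in checkpoints[0]:
        g = group_of(name)
        if g is None:
            out[name] = checkpoints[0][name]
            continue
        out[name] = sum(c * ckpt[name]
                        for c, ckpt in zip(weights[g], checkpoints))
    return out

# Tiny demo with random stand-in checkpoints:
names = ["mapping/dense0", "synthesis/4x4/conv",
         "synthesis/64x64/conv", "synthesis/256x256/conv"]
rng = np.random.default_rng(0)
furry, ffhq, fox = ({n: rng.normal(size=(3, 3)) for n in names}
                    for _ in range(3))
# Mostly-fox coarse layers, an even furry/FFHQ mix in the middle bands.
blended = blend([furry, ffhq, fox],
                {"p0": [0.1, 0.1, 0.8],
                 "p1": [0.5, 0.5, 0.0],
                 "p2": [0.2, 0.3, 0.5]})
```

Sweeping the coefficients over time is what produces the interpolation videos; each group's coefficients can move on its own schedule.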

Aug 23, 2020 · 11:46 PM UTC

Here's my notebook for generating these. colab.research.google.com/dr…
Replying to @arfafax @kitsunekey
What are you doing with the mapping network parameters in this case? Do they just stay as fox? I'm wondering (I haven't tested yet) whether they make much difference, or whether you should use different mapping networks for the different resolutions. Any thoughts/tests?
I thought about that too, but I wasn't sure what to do about them, so they're staying the same in these videos. My intuition is that interpolating just the mapping network would look similar to just interpolating the latent vector. It's probably worth testing, though.
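For anyone curious what that comparison looks like concretely, here's a runnable toy: a two-layer MLP stands in for the mapping network (all names and shapes here are made up for illustration, not from any real checkpoint). It compares blending two models' mapping weights against blending their outputs in w space, which is close to what interpolating the latent does (StyleGAN latent interpolation is typically done in w space anyway). The two only agree up to the MLP's nonlinearity:

```python
# Toy comparison: interpolate mapping-network weights vs. interpolate
# the mapped latent itself. Shapes and names are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mapping(params, z):
    """Toy mapping network: dense -> leaky ReLU -> dense, z -> w."""
    h = z @ params["w0"]
    h = np.maximum(0.2 * h, h)  # leaky ReLU
    return h @ params["w1"]

def lerp(a, b, t):
    return (1.0 - t) * a + t * b

# Two "models" with independently initialized mapping networks.
dim = 8
map_a = {"w0": rng.normal(size=(dim, dim)), "w1": rng.normal(size=(dim, dim))}
map_b = {"w0": rng.normal(size=(dim, dim)), "w1": rng.normal(size=(dim, dim))}

z = rng.normal(size=(1, dim))
t = 0.5

# Option 1: keep the mapping networks fixed, interpolate in w space
# (close to interpolating the latent vector itself).
w_latent = lerp(mapping(map_a, z), mapping(map_b, z), t)

# Option 2: interpolate the mapping networks' *weights*, then map once.
mixed = {k: lerp(map_a[k], map_b[k], t) for k in map_a}
w_weights = mapping(mixed, z)

# These disagree because the MLP is nonlinear: a lerp of weights is not
# a lerp of outputs. How different they *look* is the open question.
print(np.abs(w_latent - w_weights).mean())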
Replying to @arfafax @kitsunekey
Hey I'm putting together a quick blog post on this stuff, do you mind if I grab some stills from your video as an example? (With credit ofc)
Oh dear me. This is some exciting stuff mixed with the uncanny creeps. Lol nice work!