The new StyleGAN code is super neat. You can cross-breed latent vectors to get a sort of style transfer effect.

Feb 6, 2019 · 4:11 PM UTC

Instead of feeding the latents directly to the generator, they're first mapped through an intermediate network; its output W is split across the generator's layers, modulating features at each resolution. You can copy parts of one W into another. drive.google.com/file/d/1v-H…
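Roughly what that looks like with the official NVlabs/stylegan TF code — a minimal sketch, not verbatim from the repo; the checkpoint path, seeds, and the layer range I copy are placeholders I picked for illustration:

```python
import pickle
import numpy as np
import dnnlib.tflib as tflib

tflib.init_tf()
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:  # pretrained FFHQ checkpoint (path illustrative)
    _G, _D, Gs = pickle.load(f)

# Sample two latents z and push them through the mapping network to get W,
# broadcast to one 512-d vector per synthesis layer: shape [1, 18, 512] at 1024x1024.
z_a = np.random.RandomState(1).randn(1, Gs.input_shape[1])
z_b = np.random.RandomState(2).randn(1, Gs.input_shape[1])
w_a = Gs.components.mapping.run(z_a, None)
w_b = Gs.components.mapping.run(z_b, None)

# "Cross-breed": keep A's coarse layers (pose, face shape) and copy in B's
# finer layers (texture, color) before running the synthesis network.
w_mix = w_a.copy()
w_mix[:, 8:] = w_b[:, 8:]  # layer indices 8+ are the 64x64-and-up resolutions

images = Gs.components.synthesis.run(
    w_mix, randomize_noise=False,
    output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))
```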
Like most generative models, it's not reversible, so you can't project real images into it. Probably for the best, as we're not ready for that yet.
Replying to @genekogan
These look amazing...
Replying to @genekogan
That's incredible!
Nice juxtaposition on my timeline.
This tweet is unavailable
yeah "style transfer" is pretty generic term although StyleGAN seems a bit more general than that, since you could use it just as well to modulate more the "content" rather than "style" by focusing on the latents at the earlier smaller layers (4x4, 8x8, etc)