So after training #stylegan for 7 days on our photobooth images, here is the mandatory interpolation video. Looks like it handles the transitions from single-face to multi-face sequences a bit better than ProGAN ... Still have to play with style mixing now!
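For anyone curious how such an interpolation video is built: you pick a few random latent vectors as keyframes and walk smoothly between them, rendering one generated image per step. Below is a minimal NumPy sketch of that latent walk; the `slerp` helper and the 512-dim latent size match common StyleGAN practice, but the function names and frame counts here are just illustrative, not the exact script used for this video.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.
    Tends to give smoother transitions than a straight line in high-dim space."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    if omega < 1e-8:  # vectors (almost) parallel: fall back to plain lerp
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def interpolation_path(keyframes, steps_per_segment):
    """Walk through a list of latent keyframes, yielding one latent per video frame."""
    frames = []
    for z0, z1 in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            frames.append(slerp(z0, z1, t))
    frames.append(keyframes[-1])
    return frames

rng = np.random.RandomState(0)
keys = [rng.randn(512) for _ in range(4)]              # StyleGAN's z space is 512-dim
path = interpolation_path(keys, steps_per_segment=60)  # 60 frames per transition
# each latent in `path` would then be fed to the generator to render one frame
```

At 30 fps, 60 steps per segment gives a two-second transition between each pair of keyframes.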

Feb 13, 2019 · 3:00 PM UTC

It's in the data. The pictures are from a photo booth in a German museum, and what you see is roughly the average of all visitors who took a selfie there ... So with a different dataset you would get different results
Replying to @highqualitysh1t
Impressive. If only I could afford to spend $6,399.00 for a single Nvidia Tesla V100, I could get similar results after just 41 days and 4 hours of training the model. Nvidia, how about demonstrating a version of StyleGAN that can train on a GTX 1080 Ti?
You absolutely can use a GTX 1080 Ti. Use 512x512 or less and you are good to go. Only for 1024x1024 do you need more than 12GB of GPU RAM. But you need time for the training: 7-12 days for 512x512
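The rule of thumb above can be written down as a tiny sanity check. Note the thresholds below are just the figures from this thread (up to 512x512 fits on an 11GB 1080 Ti, 1024x1024 needs more than 12GB), not official NVIDIA requirements:

```python
def fits_on_gpu(resolution, gpu_ram_gb):
    """Rough rule of thumb from this thread: StyleGAN at up to 512x512
    trains on an 11 GB card (GTX 1080 Ti); 1024x1024 needs more than 12 GB."""
    if resolution <= 512:
        return gpu_ram_gb >= 11
    return gpu_ram_gb > 12

ok_512 = fits_on_gpu(512, 11)    # 1080 Ti at 512x512 -> fine
ok_1024 = fits_on_gpu(1024, 11)  # 1080 Ti at 1024x1024 -> too little RAM
```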
Replying to @highqualitysh1t
Hey, I do not yet have the experience or hardware to reproduce this kind of results. Can you render a longer video for me? One that goes on for, let's say, 20 minutes (10 times as long). I have no idea how long that would take. I will pay for this if I have to.
What do you want to use it for? Why should I do that for you?
Replying to @highqualitysh1t
Really cool! How large was the data set?
Replying to @highqualitysh1t
Shockingly close to plugins of the early 2000s. This is next level. Love it... keep experimenting, and if possible, make it even easier for non-dev folk like hands-on creative people to get involved without having to spend hours or days installing software. Please
Just to make that clear: I am not the developer behind the code used here. I used github.com/NVlabs/stylegan, which was written by Tero Karras from @nvidia