https://invertornot.com/
Canvas API
CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.
ImageNet Benchmark (Zero-Shot Transfer Image Classification)
Making Anime Faces With StyleGAN § Reversing StyleGAN To Control & Modify Images
LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
Searching for MobileNetV3