Bibliography:

  1. A Style-Based Generator Architecture for Generative Adversarial Networks

  2. StyleGAN—Official TensorFlow Implementation

  3. Anime Crop Datasets: Faces, Figures, & Hands § Danbooru2019 Portraits

  4. Danbooru2018 Is a Large-Scale Anime Image Database With 3.3m+ Images Annotated With 92.7m+ Tags; It Can Be Useful for Machine Learning Purposes such as Image Recognition and Generation.

  5. ThisWaifuDoesNotExist.net

  6. This Waifu Does Not Exist

  7. This Anime Does Not Exist.ai (TADNE)

  8. Artbreeder

  9. Making Anime With BigGAN

  10. Large Scale GAN Training for High Fidelity Natural Image Synthesis

  11. Pony Diffusion V6 XL

  12. https://nijijourney.com/en/

  13. Anime Image Generation

  14. https://huggingface.co/hakurei/waifu-diffusion

  15. https://www.reddit.com/r/NovelAi/comments/xu8xpg/novelai_image_generation_launch_announcement/

  16. Waifu Labs

  17. https://crypko.ai/

  18. Generative Adversarial Networks

  19. Improved Techniques for Training GANs

  20. CelebA Dataset

  21. RNN Metadata for Mimicking Author Style

  22. soumith/dcgan.torch: A Torch Implementation of https://arxiv.org/abs/1511.06434

  23. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

  24. Danbooru2020 Is a Large-Scale Anime Image Database With 4.2m+ Images Annotated With 130m+ Tags; It Can Be Useful for Machine Learning Purposes such as Image Recognition and Generation.

  25. A List of All Named GANs!

  26. https://x.com/gwern/status/828311639472611328

  27. https://x.com/gwern/status/828718629181075466

  28. StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks

  29. StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks

  30. Auto-Regressive Generative Models (PixelRNN, PixelCNN++)

  31. Stabilizing Generative Adversarial Networks: A Survey

  32. Has Anyone Reproduced the CelebA-HQ Results in the Paper?

  33. Synthesizing Programs for Images using Reinforced Adversarial Learning

  34. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms

  35. Style2Paints GitHub repository

  36. IllustrationGAN: A Simple, Clean TensorFlow Implementation of Generative Adversarial Networks With a Focus on Modeling Illustrations.

  37. MakeGirlsMoe - Create Anime Characters With A.I.!

  38. Towards the Automatic Anime Characters Creation with Generative Adversarial Networks

  39. Illustration2Vec: a semantic vector representation of illustrations

  40. https://www.reddit.com/r/MachineLearning/comments/akbc11/p_tag_estimation_for_animestyle_girl_image/

  41. Minibatch Discrimination

  42. NoGAN: Decrappification, DeOldification, and Super Resolution

  43. DINO: Emerging Properties in Self-Supervised Vision Transformers

  44. GauGAN Turns Doodles into Stunning, Realistic Landscapes

  45. Semantic Image Synthesis with Spatially-Adaptive Normalization

  46. NVlabs/SPADE: Semantic Image Synthesis With SPADE

  47. Heterochromia

  48. Progressive Growing of GANs for Improved Quality, Stability, and Variation

  49. Progressive Growing of GANs for Improved Quality, Stability, and Variation

  50. ProGAN: Progressive Growing of GANs for Improved Quality, Stability, and Variation [Video]

  51. Improved Precision and Recall Metric for Assessing Generative Models

  52. https://x.com/_Ryobot/status/1095619589495353346

  53. https://x.com/ak92501

  54. https://x.com/_Ryobot

  55. One limitation of StyleGAN is that it generates a ‘pyramid’ of images. The first layer makes a 4×4 image, which is upscaled and passed through the next layer (8×8), and so on, until out pops the final 1,024×1,024. By the time you reach 32×32, the overall structure of the object is established (is this a face? Is it a dog?), yet only the first 4 layers of the model were allowed to contribute to that decision! For a 1,024×1,024 model, that means 6 out of 10 layers of weights are irrelevant.

  56. A Style-Based Generator Architecture for Generative Adversarial Networks [Video]

  57. [StyleGAN] A Style-Based Generator Architecture for GANs, Part 1 (Algorithm Review)

  58. [StyleGAN] A Style-Based Generator Architecture for GANs, Part 2 (Results and Discussion)

  59. styleganportraits.ipynb at master

  60. GenForce: An Efficient PyTorch Library for Deep Generative Modeling (StyleGAN v1/v2, PGGAN, etc.)

  61. StyleGAN Made With Keras

  62. https://yippy.ai/skymind

  63. https://www.lyrn.ai/2018/12/26/a-style-based-generator-architecture-for-generative-adversarial-networks/

  64. What Makes a Good Image Generator AI?

  65. On Self Modulation for Generative Adversarial Networks

  66. A Neural Algorithm of Artistic Style

  67. AdaIN: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization

  68. NVlabs/ffhq-Dataset: Flickr-Faces-HQ Dataset (FFHQ)

  69. https://github.com/FeepingCreature

  70. Interpretation of Discriminator Loss

  71. The relativistic discriminator: a key element missing from standard GAN

  72. Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow

  73. https://x.com/davidstap/status/1120667403837423616

  74. Appendix E: Choosing Latent Spaces

  75. idea#better-initializations

  76. Spectral Norm Regularization for Improving the Generalizability of Deep Learning

  77. Spectral Normalization for Generative Adversarial Networks

  78. Figure 16: (a) A Typical Architectural Layout for BigGAN-Deep's _G_

  79. CS231n Convolutional Neural Networks for Visual Recognition

  80. A Technical Report on Convolution Arithmetic in the Context of Deep Learning

  81. Convolution Visualizer

  82. This Person Does Not Exist

  83. Which Face Is Real?

  84. https://blurrd.ai/realorfake/

  85. Judge Fake People

  86. StyleGAN Generates Instagram Portraits AI

  87. https://thesecatsdonotexist.com/

  88. https://thiscatdoesnotexist.com/

  89. GANcats

  90. https://x.com/genekogan/status/1093180351437029376

  91. https://x.com/MichaelFriese10/status/1151236302559305728

  92. https://thisrentaldoesnotexist.com/

  93. https://x.com/crschmidt/status/1099562911960350720

  94. https://x.com/xsteenbrugge/status/1096820308164661248

  95. https://x.com/crschmidt/status/1097200249779769344

  96. https://x.com/refikanadol/status/1106798493299949568

  97. https://x.com/roadrunning01/status/1109488507591028740

  98. https://x.com/erikswahn/status/1123951017148788738

  99. This Is an Anime Face Generator Trained With StyleGAN [这是一个用StyleGAN训练出的动漫脸生成器]

  100. https://x.com/highqualitysh1t/status/1095699293011435520

  101. https://x.com/knjcode/status/1102771002222637056

  102. https://x.com/kikko_fr/status/1094685986691399681

  103. https://imgur.com/a/8nkMmeB

  104. https://x.com/roadrunning01/status/1111686125431783424

  105. https://x.com/MichaelFriese10/status/1127614400750346240

  106. t04glovern/stylegan-pokemon: Generating Pokemon Cards Using a Mixture of StyleGAN and RNN to Create Beautiful & Vibrant Cards Ready for Battle!

  107. Go Wash Your Hands, Pokemon Generated by Neural Network

  108. Here’s a link to my Colab if you’d like to give it a go yourself. This codebase builds off of previous work from many people including @advadnoun @RiversHaveWings @NerdyRodent as well as CLIPDraw from @kvfrans @crosslabstokyo @err_more and @okw

  109. CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3

  110. https://x.com/kikko_fr/status/1095603397179396098

  111. A Machine Learning Font

  112. https://towardsdatascience.com/creating-new-scripts-with-stylegan-c16473a50fd0

  113. https://x.com/kintopp/status/1218795800400101376

  114. https://x.com/zaidalyafeai/status/1346841324461416458

  115. https://x.com/PINguAR/status/1097130957163937792

  116. https://x.com/drose101/status/1108104217577832449

  117. https://x.com/mattjarviswall/status/1110548997729452035

  118. Conditional Implementation for NVIDIA's StyleGAN Architecture

  119. [Seizure Warning] Doom Textures through StyleGAN

  120. Someone Used a Neural Network to Draw Doom Guy in High-Res: A Series of Algorithms Turned the Famous Pixelated Face into an HD Portrait

  121. https://www.reddit.com/r/computervision/comments/bfcnbj/p_stylegan_on_oxford_visual_geometry_group/

  122. This President Does Not Exist: Generating Artistic Portraits of Donald Trump Using StyleGAN Transfer Learning: Theory and Implementation in Tensorflow

  123. I Have No Mana And I Must Tap

  124. https://x.com/ionicdevil/status/1122756808991330304

  125. Eastside Hockey Manager Faces, Colin R. Small

  126. https://www.reddit.com/r/MachineLearning/comments/bkrn3i/p_stylegan_trained_on_album_covers/

  127. Tired of Books Written by Authors? Try Books Written by AI

  128. https://web.archive.org/web/20230604002332/https://thiseyedoesnotexist.com/story/

  129. Curated Output from a StyleGAN 2 Model Trained on Images That Trigger Pareidolia in the Viewer—Scraped from the `#iseefaces` and `#pareidolia` Hashtags on Instagram.

  130. https://x.com/MichaelFriese10/status/1130604229372997632

  131. https://x.com/MichaelFriese10/status/1132777932802236417

  132. This Vessel Does Not Exist.

  133. WatchGAN: Advancing Generated Watch Images With StyleGANs

  134. Generating New Watch Designs With StyleGAN

  135. This T-Shirt Does Not Exist

  136. I Trained a StyleGAN on Images of Butterflies from the Natural History Museum in London.

  137. StyleGAN for Evil: Trypophobia and Clockwork Oranging

  138. 2020-05-05-tjukanov-mapdreameraicartography.html

  139. End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks

  140. Image Data Quilts: Our New Website

  141. Are GANs Created Equal? A Large-Scale Study

  142. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

  143. Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

  144. Robustness properties of Facebook’s ResNeXt WSL models

  145. The Origins and Prevalence of Texture Bias in Convolutional Neural Networks

  146. Large Scale Adversarial Representation Learning

  147. naoto0804/pytorch-AdaIN: Unofficial PyTorch Implementation of ‘Arbitrary Style Transfer in Real-Time With Adaptive Instance Normalization’ [Huang+, ICCV2017]

  148. E Unibus Pluram: Television and U.S. Fiction

  149. https://openaccess.thecvf.com/content_ICCV_2017/papers/Zhang_StackGAN_Text_to_ICCV_2017_paper.pdf#page=7

  150. https://arxiv.org/pdf/1809.11096.pdf#page=14

  151. https://arxiv.org/pdf/2105.05233.pdf#page=20

  152. https://x.com/kashhill/status/1218542846694871040

  153. GAN Dissection: Visualizing and Understanding Generative Adversarial Networks [Blog]

  154. Spatially Controllable Image Synthesis with Internal Representation Collaging

  155. LARGE: Latent-Based Regression through GAN Semantics

  156. Generative Models: What do they know? Do they know things? Let’s find out!

  157. Rewriting a Deep Generative Model

  158. Unsupervised Discovery of Interpretable Directions in the GAN Latent Space

  159. Object Segmentation Without Labels with Large-Scale Generative Models

  160. Repurposing GANs for One-shot Semantic Part Segmentation

  161. Labels4Free: Unsupervised Segmentation using StyleGAN

  162. DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort

  163. BigDatasetGAN: Synthesizing ImageNet with Pixel-wise Annotations

  164. Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model

  165. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?

  166. Generative Adversarial Imitation Learning

  167. Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution

  168. ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

  169. Joeyballentine/ESRGAN: A Modified Version of the Original ESRGAN test.py Script With Added Features

  170. CC BY-NC 4.0 Deed Attribution-NonCommercial 4.0 International

  171. Nvidia Source Code License

  172. https://x.com/AydaoGMan/status/1269690778324013061

  173. ffhq-512-avg-tpurun1.pkl (348MB)

  174. Comment Regarding Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation

  175. https://www.copyright.gov/comp3/chap300/ch300-copyrightable-authorship.pdf#Compendium%20300.indd%3A.122046%3A96431

  176. Moe Character Generation AI: Is It OK to Grab Training Data from the ‘Sea of the Internet’? (1/5) [萌えキャラ生成AI、学習データを‘ネットの海’からゲッチュするのはアリか?]

  177. https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=1023&context=dltr#pdf

  178. https://www.rutgerslawreview.com/wp-content/uploads/2017/07/Robert-Denicola-Ex-Machina-69-Rutgers-UL-Rev-251-2016.pdf

  179. https://files.osf.io/v1/resources/np2jd/providers/osfstorage/59614dec594d9002288271b6?action=download&version=1&direct#pdf

  180. https://journal.atp.art/the-next-rembrandt-who-holds-the-copyright-in-computer-generated-art/

  181. The Machine As Author

  182. Why Is AI Art Copyright So Complicated?

  183. We’ve Been Warned about AI and Music for over 50 Years, but No One’s Prepared

  184. https://creativecommons.org/public-domain/cc0/

  185. LSUN Dataset Documentation and Demo Code

  186. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop

  187. https://x.com/syoyo/status/1093526177891770369

  188. $2019

  189. Amazon EC2 - P2 Instances

  190. Rent GPUs

  191. 2019-03-16-gwern-stylegan-facestraining.mp4

  192. Lazy, a tool for running things in idle time

  193. danbooru2021#download

  194. nagadomi/lbpcascade_animeface: A Face Detector for Anime/Manga Using OpenCV

  195. nagadomi/waifu2x: Image Super-Resolution for Anime-Style Art

  196. Utility for Working With Danbooru2018 Dataset

  197. Provide Demonstration Script for Producing Images Cropped to the Face

  198. nagadomi/animeface-2009: Face and Landmark Detector for Anime/Manga. This Is the 2009 Version of Imager::AnimeFace, but It Works on Recent Systems.

  199. animeface-2009/animeface-ruby/face_collector.rb at master

  200. Now Anyone Can Train Imagenet in 18 Minutes

  201. $2018

  202. Reimplementation of https://arxiv.org/abs/1812.04948

  203. Animating GAnime With StyleGAN: Part 1—Introducing a Tool for Interacting With Generative Models

  204. BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis § 4.2 Characterizing Instability: The Discriminator

  205. Deep reinforcement learning from human preferences

  206. Adversarial Examples Are Not Bugs, They Are Features

  207. Image Augmentations for GAN Training

  208. On Data Augmentation for GAN Training

  209. StyleGAN2-ADA: Training Generative Adversarial Networks with Limited Data

  210. Differentiable Augmentation for Data-Efficient GAN Training

  211. Here we analyze the performance of BigGAN [2] with different amounts of data on CIFAR-10. As plotted in Figure 1, even given 100% data, the gap between the discriminator’s training and validation accuracy keeps increasing, suggesting that the discriminator is simply memorizing the training images...Figure 6 analyzes that stronger DiffAugment policies generally maintain a higher discriminator’s validation accuracy at the cost of a lower training accuracy, alleviate the overfitting problem, and eventually achieve better convergence.

  212. Figure 1a shows our baseline results for different subsets of FFHQ. Training starts the same way in each case, but eventually the progress stops and FID starts to rise. The less training data there is, the earlier this happens. Figure 1b, Figure 1c shows the discriminator output distributions for real and generated images during training. The distributions overlap initially but keep drifting apart as the discriminator becomes more and more confident, and the point where FID starts to deteriorate is consistent with the loss of sufficient overlap between distributions. This is a strong indication of overfitting, evidenced further by the drop in accuracy measured for a separate validation set.

  213. BigGAN: Large Scale GAN Training For High Fidelity Natural Image Synthesis § 5.2 Additional Evaluation On JFT-300M

  214. Do GANs learn the distribution? Some Theory and Empirics

  215. Minibatch Discrimination

  216. KNN-Diffusion: Image Generation via Large-Scale Retrieval

  217. Retrieval-Augmented Diffusion Models: Semi-Parametric Neural Image Synthesis

  218. Novelty Nets: Classifier Anti-Guidance

  219. styleganime2/misc/ranker.py at master · xunings/styleganime2

  220. Discriminator Rejection Sampling

  221. Advanced Machine Learning

  222. This Fursona Does Not Exist

  223. GPT-3 Creative Fiction § Prompts As Programming

  224. Resizing or Scaling—IM V6 Examples

  225. CUDA Toolkit 12.5 Downloads

  226. Install TensorFlow 2

  227. https://colab.research.google.com/notebooks/welcome.ipynb

  228. stylegan/training/training_loop.py

  229. stylegan/train.py at master · NVlabs

  230. stylegan/train.py at master

  231. TensorBoard: Visualizing Learning

  232. stylegan/training/training_loop.py

  233. Pastebin

  234. stylegan/train.py at master

  235. Removing Blob Artifact from StyleGAN Generations without Retraining. Inspired by StyleGAN-2

  236. 2019-03-08-stylegan-animefaces-network-02051-021980.pkl.xz

  237. https://arxiv.org/pdf/1809.11096.pdf#page=4

  238. 3.1. Style Mixing

  239. Megapixel Size Image Creation using Generative Adversarial Networks

  240. stylegan/pretrained_example.py at master

  241. generate_figures.py at master · NVlabs/stylegan

  242. https://x.com/cyrildiagne

  243. https://colab.research.google.com/gist/kikko/d48c1871206fc325fa6f7372cf58db87/stylegan-experiments.ipynb

  244. https://x.com/halcy/status/1098223180454477824

  245. Waifu Synthesis: Real Time Generative Anime

  246. GPT-2 Neural Network Poetry

  247. Magenta

  248. 2019-02-14-stylegan-faces-02021-010483.tar

  249. 2019-02-26-stylegan-faces-network-02048-016041.pkl

  250. twdne#downloads

  251. https://x.com/SkyLi0n

  252. https://x.com/arfafax/status/1348052573106757636

  253. doc2vec: Distributed Representations of Sentences and Documents

  254. StackGAN: §3.2. Conditioning Augmentation

  255. Conditional Image Generation and Manipulation for User-Specified Content § Pg3

  256. Improved Consistency Regularization for GANs § 2.1 Balanced Consistency Regularization (bCR)

  257. https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf#page=4

  258. Contrastive Representation Learning: A Framework and Review

  259. https://colab.research.google.com/drive/1WLU1dIWJ4YeNlMk3Jz9q-1dhLfL23-r-

  260. Cartoon Set

  261. Tag-Based Anime Generation: This Model Uses Doc2vec Embeddings of Danbooru Tags, Combined With a Conditional StyleGAN2 Model, to Generate Anime Characters Based on Tag Inputs.

  262. StyleGAN2_experiments/Preprocess Danbooru Vectors

  263. StyleGAN-2 512px Trained on Danbooru2019

  264. https://x.com/aydaoai

  265. Making Anime With BigGAN § Danbooru2019+e621 256px BigGAN

  266. This Anime Does Not Exist [Blog]

  267. https://x.com/nearcyan

  268. 2021-01-19-gwern-stylegan2ext-danbooru2019-3x10montage-1.png

  269. 2021-01-19-gwern-stylegan2ext-danbooru2019-3x10montage-2.png

  270. 2021-01-19-gwern-stylegan2ext-danbooru2019-3x10montage-3.png

  271. tadne-l4rz-kmeans-k256-n120k-centroidsamples.jpg

  272. Here are 120K _w_ samples from @AydaoAI’s large anime model (aka TADNE) clustered into a set of 256 centroids. _watch it shine_

  273. aydao/stylegan2-surgery

  274. https://colab.research.google.com/drive/1gbqukfE5f4yYOuHWFW-85zuXW8JtWS09

  275. convert_weight.py at tadne

  276. This Anime Does Not Exist—Interpolation Videos: This Notebook Generates Interpolation Videos from the Model Used for https://thisanimedoesnotexist.ai by @aydao

  277. https://colab.research.google.com/drive/1QzttnjpQiVHJ8bnhEP0JaSwBX62V1ieG

  278. Scoring images from TADNE with CLIP

  279. This is great! Now that the model can be used in PyTorch, I’ve started playing with @AydaoAI’s anime StyleGAN directly guided by CLIP. Starting slow by searching for Asuka by name in the latent space.

  280. StyleGAN Anime Sliders: This Notebook Demonstrates How to Learn and Extract Controllable Directions from ThisAnimeDoesNotExist. It Takes a Pretrained StyleGAN and Uses DeepDanbooru to Extract Various Labels from a Number of Samples. It Then Uses Those Labels to Learn Various Attributes Which Are Controllable With Sliders

  281. https://arxiv.org/pdf/1812.04948.pdf#page=6

  282. https://arxiv.org/pdf/1912.04958.pdf#page=5

  283. Controlled GAN-Based Creature Synthesis via a Challenging Game Art Dataset—Addressing the Noise-Latent Trade-Off

  284. 4.1. Simplified Gradient Penalties

  285. Stabilizing Training of Generative Adversarial Networks through Regularization

  286. Update: the XXXL Model (250M Parameters, Doubled Latent Size)

  287. Progressive Growing of GANs for Improved Quality, Stability, and Variation: 3. Increasing Variation Using Minibatch Standard Deviation

  288. TensorFlow Research Cloud (TRC): Accelerate your cutting-edge machine learning research with free Cloud TPUs

  289. Danbooru2019 Is a Large-Scale Anime Image Database With 3.69m+ Images Annotated With 108m+ Tags; It Can Be Useful for Machine Learning Purposes such as Image Recognition and Generation.

  290. crop#figure

  291. Anime Crop Datasets: Faces, Figures, & Hands § Hands

  292. Top-K Training of GANs: Improving GAN Performance by Throwing Away Bad Samples

  293. Jukebox: We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.

  294. VQ-GAN: Taming Transformers for High-Resolution Image Synthesis

  295. not-so-BigGAN: Generating High-Fidelity Images on Small Compute with Wavelet-based Super-Resolution

  296. DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language

  297. BigGAN: Non-Normal Latent Space (binomial Mixture?)

  298. Scaling up StyleGAN-2

  299. ‘diffusion model’ tag

  300. Some Heavily Cherrypicked Samples from Transfer Learning Using @AydaoAI’s Enhanced StyleGAN-2 Anime Model After 2 Days.

  301. aydao-anime-danbooru2019s-512-5268480.pkl

  302. https://drive.google.com/file/d/1qNhyusI0hwBLI-HOavkNP5I0J0-kcN4C/view

  303. https://drive.google.com/file/d/1A-E_E32WAtTHRlOzjhhYhyyBDXLJN9_H/view

  304. This Anime Does Not Exist

  305. Some AI Koans § http://www.catb.org/esr/jargon/html/koans.html#id3141241

  306. How I Learned to Stop Worrying and Love Transfer Learning

  307. Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space? § Pg2

  308. 2019-02-10-stylegan-holo-handselectedsamples.zip

  309. Holo Cropped Face Collection

  310. https://www.reddit.com/r/SpiceandWolf/comments/apazs0/my_holo_face_collection/

  311. https://www.reddit.com/r/SpiceandWolf/comments/apbz6r/all_those_cropped_holo_faces_uprimarypizza_posted/

  312. 2019-02-10-stylegan-holofaces-networksnapshot-00015-011370.pkl

  313. 2019-02-13-stylegan-asuka-psi1.2.tar

  314. 2019-02-11-stylegan-asuka-handselectedsamples.zip

  315. Asukas for a Rainy Monday

  316. https://www.reddit.com/r/evangelion/comments/apmkjm/brighten_your_monday_with_some_asukas_album_of_130/

  317. https://mega.nz/#!0JVxHQCD!C7ijBpRWNpcL_gubWFR-GTBDJTW1jXI6ThzSxwaw2aE

  318. https://www.reddit.com/r/MachineLearning/comments/apq4xu/p_stylegan_on_anime_faces/egf8pvt/

  319. Zuihou KanColle Wiki

  320. Akizuki KanColle Wiki

  321. https://x.com/Gansodeva/status/1122361947410849792

  322. Arknights

  323. https://www.reddit.com/r/MachineLearning/comments/apq4xu/p_stylegan_on_anime_faces/egmyf60/

  324. FGO StyleGAN: This Heroic Spirit Doesn’t Exist

  325. https://x.com/roadrunning01/status/1097513035474845696

  326. https://x.com/FlatIsNice/status/1112671357706424322

  327. Asashio KanColle Wiki

  328. This Asashio Does Not Exist

  329. https://x.com/__meimiya__/status/1102679068242173952

  330. https://x.com/__meimiya__/status/1134441616477806592

  331. https://x.com/__meimiya__/status/1134751068758265856

  332. https://www.reddit.com/r/touhou/comments/gl180j/here_have_a_few_marisa_portraits/

  333. A Few Marisa Portraits

  334. https://x.com/3D_DLW/status/1227313334237745155

  335. Fine-Tuning a StyleGAN2 Model [微调StyleGAN2模型]

  336. Fine-Tuning a StyleGAN2 Model (Using Google Colab) [微调StyleGAN2模型(使用Google Colab)]

  337. Warship Girls (Video Game)

  338. Played around with @gwern’s TWDNEv2 model to generate images of Hayasaka Ai! This is after ~9 hours of training (n = 300+). Stopped working on it after a bit, so a bunch of potential improvements. More thoughts here: https://github.com/ZKTKZ/thdne/bl

  339. hayasaka.ai/StyleGAN2_Tazik_25GB_RAM.ipynb at master · Taziksh/hayasaka.ai

  340. Stylegan Neural Ahegao

  341. Andy8744 Expert

  342. https://www.kaggle.com/datasets/andy8744/rezero-rem-anime-faces-for-gan-training

  343. https://www.kaggle.com/code/andy8744/predict-anime-face-using-pre-trained-model/data

  344. https://github.com/ultralytics/yolov5/issues/6998#issue-1170533269

  345. Rem

  346. https://www.youtube.com/watch?v=D2zjc--sDaY

  347. https://x.com/lord_yuanyuan

  348. https://www.kaggle.com/code/andy8744/generating-ganyu-from-trained-model/notebook

  349. Ganyu Genshin Impact Wiki

  350. https://x.com/sunkworld/status/1100954144905543680

  351. https://x.com/misaki_cradle

  352. 2019-04-30-stylegan-danbooru2018-portraits-02095-066083.pkl

  353. 1996-sadamoto-howtodrawshinjinadia.jpg

  354. 2019-05-03-stylegan-malefaces-02107-069770.pkl

  355. 2019-05-06-stylegan-malefaces-1ksamples.tar

  356. https://mega.nz/#!OEFjWKAS!QIqbb38fR5PnIZbdr7kx5K-koEMtOQ_XQXRqppAyv-k

  357. Danbooru2018 Male Face StyleGAN, 400 Random Samples

  358. https://x.com/Buntworthy/status/1213402237269159936

  359. Ukiyo-e Search

  360. https://x.com/AydaoGMan/status/1217276442230378497

  361. ArtGAN/WikiArt Dataset

  362. GAN Explorations 011: StyleGAN2 + Stochastic Weight Averaging

  363. Averaging Weights Leads to Wider Optima and Better Generalization

  364. StyleGAN Samples

  365. StyleGAN network blending

  366. Toonify: Resolution Dependent GAN Interpolation for Controllable Image Synthesis Between Domains

  367. StyleGAN2 Blending of Humans With Cartoons

  368. ‘Network blending in StyleGAN: Swapping layers between two models in StyleGAN gives some interesting results. You need a base model and a second model which has been fine-tuned from the base.’, Buntworthy

  369. I just tried my StyleGAN layer swapping method the other way round to what I’d been doing before. So making the ukiyo-e model human (rather than the other way around) and I love the results!

  370. Combining my cross-model interpolation with @Buntworthy‘s layer swapping idea. Here the different resolution layers are being interpolated at different rates between furry, FFHQ, and @KitsuneKey’s foxes. P0 is 4×4 and 8×8, P1 is 16×16 to 128×128, and P2 is 256×256 to 512×512.

  371. Cross-model interpolations are one of those neat hidden features that arise from transfer learning. Here I‘m interpolating between 5 StyleGAN2 models: furry, FFHQ, anime, ponies, and @KitsuneKey’s fox model. All were trained off the same base model, which makes blending possible.

  372. Imagined Visage

  373. https://x.com/pbaylies/status/1136307166695108609

  374. Discovering Interpretable GAN Controls

  375. Adversarial Feature Learning

  376. Inverting The Generator Of A Generative Adversarial Network (II)

  377. Reinventing the Wheel: Discovering the Optimal Rolling Shape With PyTorch

  378. Galton boards are fun and all, but what about asymmetric Galton boards 🎉😇 By tuning (thanks #autodiff !!) the probabilities of going to the left/right, one can pretty much obtain any desired final distribution 😍 #probability #python #jax

  379. Mining gold from implicit models to improve likelihood-free inference

  380. Gradient Theory of Optimal Flight Paths

  381. A Steepest-Ascent Method for Solving Optimum Programming Problems

  382. Deep Set Prediction Networks

  383. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

  384. Unadversarial Examples: Designing Objects for Robust Vision

  385. Image Synthesis from Yahoo’s open_nsfw

  386. Ambigrammatic Figures: 55 Grotesque Ambigrams

  387. Amplifying The Uncanny § Pg5

  388. Differentiable Image Parameterizations

  389. Style Generator Inversion for Image Enhancement and Animation

  390. Style Generator Inversion for Image Enhancement and Animation

  391. On the "steerability" of generative adversarial networks

  392. Interpreting the Latent Space of GANs for Semantic Face Editing

  393. Deep Danbooru

  394. SummitKwan/transparent_latent_gan: Use Supervised Learning to Illuminate the Latent Space of GAN for Controlled Generation and Edit

  395. https://www.kaggle.com/summitkwan/tl-gan-demo

  396. Generating Custom Photo-Realistic Faces Using AI: Controlled Image Synthesis and Editing Using a Novel (Transparent Latent-Space GAN) TL-GAN Model

  397. StyleGAN Encoder—Converts Real Images to Latent Space

  398. https://www.reddit.com/r/MachineLearning/comments/aq6jxf/p_stylegan_encoder_from_real_images_to_latent/

  399. StyleGAN Encoder—Converts Real Images to Latent Space

  400. https://github.com/Puzer/stylegan-encoder/blob/master/Play_with_latent_directions.ipynb

  401. https://x.com/halcy

  402. StyleGAN—Official TensorFlow Implementation

  403. https://imgur.com/d8EYyel

  404. https://imgur.com/BLWbiXT

  405. stylegan-generate-encode.ipynb at master

  406. https://colab.research.google.com/drive/1LiWxqJJMR5dg4BxwUgighaWp2U_enaFd#offline=true&sandboxMode=true

  407. Icosahedron

  408. https://www.reddit.com/r/AnimeResearch/comments/aul582/modification_of_anime_face_stylegan_disentangled/

  409. 2020-snowyhalcy-stylegan-animefaceediting-brightness.png

  410. Interactive Waifu Modification

  411. https://www.youtube.com/watch?v=GRG6czAZql0

  412. StyleGAN—Official TensorFlow Implementation

  413. https://www.reddit.com/r/MediaSynthesis/comments/c6axmr/close_the_world_txen_eht_nepo/

  414. This Anime Does Not Exist [Video]

  415. https://x.com/Artbreeder/status/1182293849181495296

  416. https://x.com/arfafax/status/1263638042889224193

  417. This Fursona Does Not Exist—Fursona Editor (Tensorflow Version)

  418. This Pony Does Not Exist

  419. GANSpace: Discovering Interpretable GAN Controls

  420. https://x.com/realmeatyhuman/status/1255570195319590913

  421. https://colab.research.google.com/drive/1g-ShMzkRWDMHPyjom_p-5kqkn2f-GwBi

  422. This Waifu Does Not Exist § TWDNEv3

  423. StyleGAN-2—Official TensorFlow Implementation

  424. StyleGAN-2-ADA—Official PyTorch Implementation

  425. StyleGAN2

  426. MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks

  427. 2020-01-11-skylion-stylegan2-animeportraits-networksnapshot-024664.pkl.xz

  428. https://hivemind-repo.s3-us-west-2.amazonaws.com/twdne3/twdne3.pt

  429. https://hivemind-repo.s3-us-west-2.amazonaws.com/twdne3/twdne3.onnx

  430. Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch

  431. https://colab.research.google.com/drive/1Pv8OIFlonha4KeYyY2oEFaK4mG-alaWF

  432. https://x.com/layolu/status/1218177246495535104

  433. https://x.com/theshawwn/status/1230022825538248704

  434. StyleGAN-2—Official TensorFlow Implementation

  435. StyleGAN → BigGAN: Import the StyleGAN Large 8x512 FC _z_ → _w_ Embedding Trick

  436. Minibatch Discrimination

  437. EndingCredits/Set-CGAN: Adaptation of Conventional GAN to Condition on Additional Input Set

  438. FIGR: Few-shot Image Generation with Reptile

  439. Few-Shot Unsupervised Image-to-Image Translation

  440. Image Generation From Small Datasets via Batch Statistics Adaptation

  441. YFCC100M: The New Data in Multimedia Research

  442. Evolving Normalization-Activation Layers

  443. https://www.reddit.com/r/MachineLearning/comments/e23ezq/p_using_stylegan_to_make_a_music_visualizer/

  444. Pretrained Anime StyleGAN-2: Convert to Pytorch and Editing Images by Encoder by Allen Ng Pickupp

  445. Video Shows off Hundreds of Beautiful AI-Created Anime Girls in Less Than a Minute

  446. Talking Head Anime from a Single Image

  447. https://podgorskiy.com/static/stylegan/stylegan.html

  448. Unofficial Implementation of StyleGAN's Generator

  449. StyleGAN-2—Official TensorFlow Implementation

  450. https://towardsdatascience.com/stylegan-v2-notes-on-training-and-latent-space-exploration-e51cf96584b3

  451. Practical aspects of StyleGAN2 training

  452. Morphing Anime Girls Quiz

  453. https://amitness.com/posts/google-colab-tips

  454. Deep Generative Modeling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models

  455. State-Of-The-Art Image Generative Models

  456. Generative Modeling by Estimating Gradients of the Data Distribution

  457. [P] StyleGAN on Anime Faces

  458. Anime Generation Using the StyleGAN Neural Network [Генерация Аниме С Помощью Нейросети StyleGAN]