Description of 3 anime datasets for machine learning based on Danbooru: cropped anime faces, whole-single-character crops, and hand crops (with hand detection model).
We create & release PALM, the PALM Anime Locator Model: a pretrained anime hand detection/localization neural network, along with 3 accompanying anime hand datasets:
1. A dataset of 5,382 anime-style Danbooru2019 images annotated with the locations of 14,394 hands.
   This labeled dataset is used to train a YOLOv3 model to detect hands in anime.
2. A second dataset of 96,534 hands cropped from the Danbooru2019 SFW dataset using the PALM YOLO model.
3. A cleaned version of #2, consisting of 58,536 hand crops upscaled to ≥512px.
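The cropping step above takes YOLO detections and cuts hand regions out of the source images. As a minimal sketch (the function name and padding scheme are illustrative, not from the released code): YOLO emits boxes as normalized (center-x, center-y, width, height), which must be converted to pixel coordinates and clamped to the image bounds before cropping.

```python
def yolo_box_to_crop(box, img_w, img_h, pad=0.1):
    """Convert a normalized YOLO (cx, cy, w, h) box into clamped pixel
    crop coordinates (x1, y1, x2, y2), with optional padding so that
    fingers at the box edge are not cut off."""
    cx, cy, w, h = box
    # widen the box by `pad` fraction on each axis
    w, h = w * (1 + pad), h * (1 + pad)
    x1 = max(0, int((cx - w / 2) * img_w))
    y1 = max(0, int((cy - h / 2) * img_h))
    x2 = min(img_w, int((cx + w / 2) * img_w))
    y2 = min(img_h, int((cy + h / 2) * img_h))
    return x1, y1, x2, y2
```

The resulting tuple can be fed directly to `PIL.Image.crop()`; crops smaller than the target resolution would then be upscaled to ≥512px as in dataset #3.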
Hand detection can be used to clean images (e.g. remove face images with hands obscuring the face), to generate datasets of just hands (as a form of data augmentation for GANs), to generate reference datasets for artists, or for other purposes. (For human hands, see the “11K Hands” dataset.)
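The image-cleaning use case above reduces to a simple filter over the detector's output. A hedged sketch, assuming detections have already been computed per image and stored as (confidence, box) tuples (the data layout and threshold are illustrative):

```python
def filter_hand_free(detections, conf_threshold=0.5):
    """Keep only filenames whose images contain no hand detection at or
    above `conf_threshold`; `detections` maps filename -> list of
    (confidence, box) tuples from the hand detector."""
    return [filename
            for filename, dets in detections.items()
            if all(conf < conf_threshold for conf, _ in dets)]
```

Lowering the threshold makes the filter stricter (more images discarded); the right value depends on how costly a stray hand is for the downstream model.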