“SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning”, 2019-11-12:
Few-shot learners aim to recognize new object classes based on a small number of labeled training examples. To prevent overfitting, state-of-the-art few-shot learners use meta-learning on convolutional-network features and perform classification using a nearest-neighbor classifier.
This paper studies the accuracy of nearest-neighbor baselines without meta-learning.
Surprisingly, we find simple feature transformations suffice to obtain competitive few-shot learning accuracies. For example, we find that a nearest-neighbor classifier used in combination with mean-subtraction and 𝓁2-normalization [to enable cosine distance] outperforms prior results in 3 of 5 settings on the miniImageNet dataset.
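The transform described above is easy to sketch. The following is a minimal NumPy illustration (not the authors' code) of the idea: subtract a mean feature vector, 𝓁2-normalize, and classify queries by their nearest class centroid; the function and argument names (`simpleshot_predict`, `base_mean`, etc.) are illustrative assumptions, not from the paper.

```python
import numpy as np

def simpleshot_predict(support_feats, support_labels, query_feats, base_mean):
    """Nearest-centroid classification after mean-subtraction and L2-normalization.

    support_feats : (n_support, d) features of the labeled few-shot examples
    support_labels: (n_support,) integer class labels
    query_feats   : (n_query, d) features to classify
    base_mean     : (d,) mean feature vector to subtract (e.g. from base classes)
    """
    def transform(x):
        # Center, then project onto the unit sphere (Euclidean distance
        # between unit vectors is monotone in cosine distance).
        x = x - base_mean
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    s = transform(np.asarray(support_feats, dtype=float))
    q = transform(np.asarray(query_feats, dtype=float))

    # One centroid per class, averaged over that class's support examples.
    classes = np.unique(support_labels)
    centroids = np.stack([s[support_labels == c].mean(axis=0) for c in classes])

    # Assign each query to the class with the nearest centroid.
    dists = np.linalg.norm(q[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]
```

With a pretrained feature extractor already in hand, this is the entire few-shot classifier: no episodic training loop is needed at test time.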