“A Rotation and a Translation Suffice: Fooling CNNs With Simple Transformations”, 2024-04-21:
We show that CNNs are not robust to simple rotations and translations, and explore methods for improving this robustness.
We show that simple spatial transformations, namely translations and rotations alone, suffice to fool neural networks on a significant fraction of their inputs in multiple image classification tasks.
Our results are in sharp contrast to previous work in adversarial robustness that relied on more complicated optimization approaches unlikely to appear outside a truly adversarial context. Moreover, the misclassifying rotations and translations are easy to find and require only a few black-box queries to the target model.
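To make the few-query search concrete, here is a minimal sketch of such a black-box attack, assuming a PyTorch classifier `model` and a `(C, H, W)` image tensor; the function name `find_fooling_transform` and the grid ranges are illustrative assumptions, not the paper's exact parameters:

```python
import torch
import torchvision.transforms.functional as TF

def find_fooling_transform(model, image, label,
                           angles=range(-30, 31, 5),
                           shifts=range(-3, 4, 3)):
    """Grid-search rotations and translations until the model misclassifies.

    `model`, `image` (a C,H,W float tensor), and `label` (the true class
    index) are assumed inputs; the search ranges are illustrative. Each
    candidate costs exactly one black-box (forward-only) query.
    """
    model.eval()
    with torch.no_grad():
        for angle in angles:
            for dx in shifts:
                for dy in shifts:
                    # Apply a rigid transform: rotate by `angle` degrees,
                    # shift by (dx, dy) pixels; no scaling or shearing.
                    candidate = TF.affine(
                        image, angle=float(angle),
                        translate=[dx, dy], scale=1.0, shear=[0.0])
                    pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
                    if pred != label:
                        return angle, dx, dy  # fooling transform found
    return None  # no fooling transform on this grid
```

Each candidate transformation costs a single forward pass, so when a fooling rotation/translation exists, it is typically found within a handful of queries, consistent with the black-box setting described above.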
Overall, our findings emphasize the need to design robust classifiers even for natural input transformations in benign settings.
[Keywords: robustness, spatial transformations, invariance, rotations, data augmentation, robust optimization]