“Intriguing Properties of Adversarial Examples”, Ekin Dogus Cubuk, Barret Zoph, Samuel S. Schoenholz, Quoc V. Le (2018-02-15)⁠:

Adversarial error follows a similar power-law form across all datasets and models studied, and architecture matters for robustness.

It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high-dimensional data, that they overfit, or that they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol, and does not change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR-10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR-10. Our resulting architecture is more robust to white-box and black-box attacks compared to previous attempts.
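The FGSM attack mentioned in the abstract perturbs an input by a step of size ε in the direction of the sign of the loss gradient with respect to that input. As a minimal sketch (not the paper's code), the idea can be shown with logistic regression, where the input-gradient has a closed form and no autodiff library is needed; all names here (`fgsm`, `logistic_loss`) are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Binary cross-entropy for a single example (y in {0, 1}).
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # For logistic regression, the gradient of the loss w.r.t. the
    # input x is (p - y) * w, so the FGSM step is computable directly:
    # x_adv = x + eps * sign(grad_x loss).
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # toy "model" weights
x = rng.normal(size=8)   # clean input
y = 1.0

x_adv = fgsm(w, x, y, eps=0.1)
clean = logistic_loss(w, x, y)
adv = logistic_loss(w, x_adv, y)
```

Because the step moves exactly against the margin, the loss on `x_adv` is strictly higher than on `x`; the paper's observation is about how the resulting error rate scales as a power-law in ε across models and datasets.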

[Keywords: adversarial examples, universality, neural architecture search]