“On Optimal Learning Under Targeted Data Poisoning”, Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran (2022-10-06):

Consider the task of learning a hypothesis class ℋ in the presence of an adversary that can replace up to an η fraction of the examples in the training set with arbitrary adversarial examples. The adversary aims to cause the learner to err on a particular target test point x, which is known to the adversary but not to the learner.
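To make the threat model concrete, here is a minimal sketch of the interaction as a game; the function names (`learner`, `adversary`) and the single-target evaluation are my illustrative assumptions, not the paper's formalism:

```python
def poisoning_game(learner, adversary, sample, x_target, y_target, eta):
    """One round of the targeted-poisoning game.

    The adversary sees the clean sample AND the target test point, and may
    replace up to an eta fraction of the examples; the learner sees only the
    (possibly corrupted) sample and must then predict on x_target.
    """
    budget = int(eta * len(sample))
    poisoned = adversary(sample, x_target, budget)    # at most `budget` changes
    assert sum(a != b for a, b in zip(sample, poisoned)) <= budget
    h = learner(poisoned)                             # hypothesis from corrupted data
    return h(x_target) == y_target                    # did the learner survive?
```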

In this work we aim to characterize the smallest achievable error ε = ε(η) by the learner in the presence of such an adversary in both realizable and agnostic settings. We fully achieve this in the realizable setting, proving that ε = Θ(VC(ℋ)·η), where VC(ℋ) is the VC dimension of ℋ. Remarkably, we show that the upper bound can be attained by a deterministic learner.
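For intuition on the rate, the following toy experiment (my own illustration, not the paper's construction) instantiates it for 1-D thresholds, where VC(ℋ) = 1 and the bound reads ε = Θ(η): an adversary with budget ηm that stacks wrongly-labeled copies of the target point can flip an ERM learner precisely on an interval of width ≈ η around the true threshold.

```python
import numpy as np

def erm_threshold(xs, ys):
    """ERM over h_t(x) = 1[x >= t]: scan all realizable cut points."""
    cands = np.concatenate(([-np.inf], np.unique(xs), [np.inf]))
    errs = [np.sum((xs >= t).astype(int) != ys) for t in cands]
    return cands[int(np.argmin(errs))]

def attack_flips(xs, ys, x0, budget):
    """Replace `budget` examples with copies of the target x0, labeled 0
    (the wrong label, since x0 lies above the true threshold)."""
    xs_p, ys_p = xs.copy(), ys.copy()
    xs_p[:budget], ys_p[:budget] = x0, 0
    return x0 < erm_threshold(xs_p, ys_p)   # True iff ERM now errs on x0

rng = np.random.default_rng(0)
m, t_true = 400, 0.5
xs = rng.uniform(0, 1, m)
ys = (xs >= t_true).astype(int)

for eta in (0.01, 0.02, 0.05):
    targets = np.linspace(t_true + 1e-3, 1.0, 120)   # points with true label 1
    frac = np.mean([attack_flips(xs, ys, x0, int(eta * m)) for x0 in targets])
    print(f"eta={eta}: attackable slice of targets ≈ {frac:.3f}  (~ eta/0.5)")
```

The attack succeeds only when the ηm poisoned copies outvote the clean points between the true threshold and the target, so the attackable region grows linearly with η, matching ε = Θ(η) for this VC-dimension-1 class.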

In the agnostic setting we reveal a more elaborate landscape: we devise a deterministic learner with a multiplicative regret guarantee of ε ≤ C · OPT + 𝒪(VC(ℋ) · η), where C > 1 is a universal numerical constant. We complement this by showing that for any deterministic learner there is an attack which worsens its error to at least 2 · OPT. This implies that a multiplicative deterioration in the regret is unavoidable in this case.
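One way to see why determinism is costly: a deterministic learner can be perfectly simulated by the adversary, who can then search its entire budget for a corruption that fails the target. The brute-force search below is a generic illustration of this white-box advantage (my sketch, not the paper's lower-bound construction); `learner` and `candidate_points` are assumed inputs:

```python
from itertools import combinations

def find_attack(learner, sample, x0, y0, budget, candidate_points):
    """Exhaustively simulate a *deterministic* learner on every corruption
    that replaces up to `budget` examples with a single candidate example,
    returning a poisoned sample on which the learner errs at x0 (or None).
    """
    for k in range(budget + 1):
        for idxs in combinations(range(len(sample)), k):
            for repl in candidate_points:            # (x, y) pairs to inject
                poisoned = list(sample)
                for i in idxs:
                    poisoned[i] = repl
                if learner(poisoned)(x0) != y0:      # simulate, then test on x0
                    return poisoned
    return None
```

The search is exponential and purely illustrative; its only point is that the adversary can anticipate a deterministic learner's output exactly, which is the leverage behind the 2 · OPT barrier above.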

Finally, the algorithms we develop for achieving the optimal rates are inherently improper (their output predictor need not belong to ℋ). Nevertheless, we show that for a variety of natural concept classes, such as linear classifiers, it is possible to retain the dependence ε = Θ(η) with a proper algorithm in the realizable setting. Here Θ conceals a polynomial dependence on VC(ℋ).