“An Equivalence Between Data Poisoning and Byzantine Gradient Attacks”, 2022-02-17 ():
To study the resilience of distributed learning, the “Byzantine” literature considers a strong threat model where workers can report arbitrary gradients to the parameter server. While this model has helped obtain several fundamental results, it has sometimes been considered unrealistic when the workers are mostly trustworthy machines.
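For concreteness, here is a minimal sketch of that threat model, assuming a least-squares objective and a server that naively averages reported gradients; all names are illustrative, not from the paper:

```python
import numpy as np

def honest_gradient(theta, X, y):
    # Gradient of the local least-squares loss mean((X @ theta - y)^2).
    return 2 * X.T @ (X @ theta - y) / len(y)

def byzantine_gradient(dim):
    # A Byzantine worker ignores its data entirely and reports
    # an arbitrary vector of its choosing.
    return 1e6 * np.ones(dim)

def server_step(theta, reported_gradients, lr=0.1):
    # Naive averaging: a single Byzantine report can move the
    # average (and hence the model) arbitrarily far.
    return theta - lr * np.mean(reported_gradients, axis=0)
```

A single arbitrary report suffices to wreck plain averaging, which is why this literature studies robust aggregation rules.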
In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).
This equivalence makes it possible to obtain new impossibility results on the resilience of any “robust” learning algorithm to data poisoning in highly heterogeneous applications, as corollaries of existing impossibility theorems on Byzantine machine learning. Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) can be very effective against classical personalized federated learning models.
…We prove in this paper a somewhat surprising equivalence between gradient attacks and data poisoning, in a convex setting. Essentially, we give the first practically compelling argument for the necessity of protecting learning against gradient attacks. Our result enables us to carry over results on Byzantine gradient attacks to the data poisoning world. For instance, the impossibility result of El-Mhamdi et al. (2021a), combined with our equivalence result, implies that the more heterogeneous the data, the more vulnerable any “robust” learning algorithm is to data poisoning. Also, we derive concrete data poisoning attacks from gradient ones.
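To see the flavor of the reduction in the simplest convex case, here is a toy sketch: with a single-sample least-squares loss, an attacker can solve in closed form for a poisoned pair (x, y) whose gradient at the current model equals any target gradient g. This is only an illustration of the idea, not the paper's construction, which applies to general personalized federated learning systems with PAC guarantees:

```python
import numpy as np

def poison_for_target_gradient(theta, g):
    # The loss (x @ theta - y)^2 has gradient 2 * (x @ theta - y) * x.
    # Picking x = g and y = g @ theta - 1/2 leaves a residual of 1/2,
    # so the gradient at theta is exactly 2 * (1/2) * g = g.
    x = g.copy()
    y = x @ theta - 0.5
    return x, y

# Sanity check: the crafted data point reproduces the target gradient.
rng = np.random.default_rng(0)
theta, g = rng.normal(size=5), rng.normal(size=5)
x, y = poison_for_target_gradient(theta, g)
assert np.allclose(2 * (x @ theta - y) * x, g)
```

Any gradient a Byzantine worker could report can thus be realized by data the worker could legitimately hold, which is the sense in which gradient attacks reduce to data poisoning.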