Authors
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
Publication date
2020/8/6
Conference
Uncertainty in Artificial Intelligence
Pages
261-270
Publisher
PMLR
Description
Recently, new defense techniques have been developed to tolerate Byzantine failures in distributed machine learning. The Byzantine model captures workers that behave arbitrarily, including malicious and compromised workers. In this paper, we break two prevailing Byzantine-tolerant techniques. Specifically, we show that two robust aggregation methods for synchronous SGD, namely coordinate-wise median and Krum, can be broken using new attack strategies based on inner product manipulation. We prove our results theoretically and validate them empirically.
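The core idea behind inner product manipulation is that a robust aggregator can be steered so that its output has a negative inner product with the true gradient, making SGD ascend rather than descend. Below is a minimal sketch (with hypothetical numbers, not the paper's exact construction) showing how coordinate-wise median can be defeated when the variance of honest gradients is large relative to their mean: a Byzantine minority pushes the median onto an outlying honest value that points opposite to the true gradient.

```python
import numpy as np

def coordinate_wise_median(grads):
    """Robust aggregation: take the median of each coordinate across workers.

    grads has shape (n_workers, dim).
    """
    return np.median(grads, axis=0)

# Hypothetical setup: 3 honest workers with high-variance gradients whose
# mean is positive, plus 2 Byzantine workers sending large negative values.
honest = np.array([[-1.0], [0.5], [2.0]])   # true gradient (mean) = 0.5 > 0
byzantine = np.array([[-10.0], [-10.0]])    # crafted Byzantine gradients
all_grads = np.vstack([honest, byzantine])

# Median of {-10, -10, -1, 0.5, 2} is -1: an honest but outlying value.
agg = coordinate_wise_median(all_grads)
true_grad = honest.mean(axis=0)

# The aggregate's inner product with the true gradient is negative,
# so a descent step on `agg` moves the model in an ascending direction.
inner = float(np.dot(agg, true_grad))
```

Note that the Byzantine workers here are a minority (2 of 5), which is within the nominal tolerance of coordinate-wise median; the attack succeeds because the aggregator's per-coordinate robustness does not guarantee a positive inner product with the true gradient.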