Authors
Jie Yang, Thar Baker, Sukhpal Singh Gill, Xiaochuan Yang, Weifeng Han, Yuanzhang Li
Publication date
2024/7
Journal
Software: Practice and Experience
Volume
54
Issue
7
Pages
1257-1274
Description
Federated learning (FL) is widely used in edge-cloud collaborative training because of its distributed architecture and its privacy-preserving property of not sharing local data. FLTrust, a state-of-the-art FL defense method, is a federated learning defense system guided by a server-side trust bootstrap. However, we found that FLTrust is not as robust as claimed. Therefore, in the edge collaboration scenario, we mainly study poisoning attacks on the FLTrust defense system. Under FLTrust's trust-guided aggregation rule, model updates from participants that deviate significantly from the direction of the root gradient are eliminated, which limits the poisoning effect on the global model. To solve this problem, we construct malicious model updates that deviate from the trusted gradient as much as possible while still avoiding removal by the FLTrust aggregation rules, thereby achieving a model poisoning attack. First, we …
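The aggregation rule the description refers to can be sketched as follows: FLTrust assigns each client a trust score equal to the ReLU-clipped cosine similarity between its update and a server-side "root" update, rescales client updates to the root update's magnitude, and averages them weighted by trust score. The function name `fltrust_aggregate` and the flattened-vector representation of updates are illustrative assumptions, not the paper's code; this is a minimal sketch of the published FLTrust rule, not of the attack itself.

```python
import numpy as np

def fltrust_aggregate(client_updates, root_update):
    """Trust-guided aggregation in the style of FLTrust (illustrative sketch).

    client_updates: list of 1-D numpy arrays (flattened model updates).
    root_update: 1-D numpy array computed by the server on its clean root dataset.
    """
    root_norm = np.linalg.norm(root_update)
    scores, normalized = [], []
    for u in client_updates:
        # Trust score: cosine similarity with the root direction, clipped at 0,
        # so updates pointing away from the root gradient get zero weight.
        cos = np.dot(u, root_update) / (np.linalg.norm(u) * root_norm + 1e-12)
        scores.append(max(cos, 0.0))
        # Normalize each update to the root update's magnitude to bound its influence.
        normalized.append(u * root_norm / (np.linalg.norm(u) + 1e-12))
    scores = np.array(scores)
    if scores.sum() == 0.0:
        # Every client was rejected; fall back to the server's own update.
        return root_update
    return np.average(np.stack(normalized), axis=0, weights=scores)
```

Under this rule, an update anti-parallel to the root gradient receives weight zero, which is exactly why the attack described above must stay within the accepted similarity region while maximizing its deviation.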
Scholar articles
J Yang, T Baker, SS Gill, X Yang, W Han, Y Li - Software: Practice and Experience, 2024