Authors
Tong Xia, Abhirup Ghosh, Cecilia Mascolo
Publication date
2023/12/4
Description
Learning a global model by abstracting the knowledge distributed across multiple clients, without aggregating the raw data, is the primary goal of Federated Learning (FL). Typically, this works in rounds alternating between parallel local training at several clients and model aggregation at a server. We found that existing FL methods under-perform when local datasets are small and exhibit severe label skew, as these conditions lead to over-fitting and local model bias. This is a realistic setting in many real-world applications. To address the problem, we propose FLea, a unified framework that tackles over-fitting and local bias by encouraging clients to exchange privacy-protected features to aid local training. The features are activations from an intermediate layer of the model, which are obfuscated before being shared with other clients to protect sensitive information in the data. FLea leverages a novel way of combining local and shared features as augmentations to enhance local model learning. Our extensive experiments demonstrate that FLea outperforms state-of-the-art FL methods that share only model parameters, and also outperforms FL methods that share data augmentations, while reducing the privacy vulnerability associated with sharing data augmentations.
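To make the feature-augmentation idea concrete, below is a minimal sketch of one local training step in which a client's own intermediate-layer activations are combined with features received from other clients via a MixUp-style interpolation. The helper names, the Beta-distributed mixing coefficient, and the weighted two-term loss are illustrative assumptions for this sketch, not the paper's exact formulation, and the obfuscation of shared features is assumed to happen before they reach the client.

```python
import torch

def mix_features(local_feats, local_labels, shared_feats, shared_labels, alpha=1.0):
    """Illustrative MixUp-style combination of local and shared features.

    local_feats / shared_feats: intermediate-layer activations of shape
    (batch, ...); shared_feats are assumed to arrive already obfuscated.
    Returns the mixed features, both label sets, and the mixing weight so
    the classifier head can be trained with a weighted loss.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(shared_feats.size(0))[: local_feats.size(0)]
    mixed = lam * local_feats + (1.0 - lam) * shared_feats[idx]
    return mixed, local_labels, shared_labels[idx], lam

def local_step(encoder, head, optimizer, criterion, batch, feat_buffer):
    """One local update (sketch): encode raw data to the intermediate layer,
    mix with the shared feature buffer, then update on the mixed features."""
    x, y = batch
    shared_feats, shared_labels = feat_buffer            # received from other clients
    feats = encoder(x)                                    # intermediate activations
    mixed, y_a, y_b, lam = mix_features(feats, y, shared_feats, shared_labels)
    logits = head(mixed)
    loss = lam * criterion(logits, y_a) + (1.0 - lam) * criterion(logits, y_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the mixed inputs carry labels from both the local batch and the shared buffer, training against both (weighted by the mixing coefficient) exposes the local model to classes it may rarely or never see locally, which is how this kind of augmentation can counteract label-skew-induced bias.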