FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information

Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

62 Scopus citations

Abstract

Federated learning is vulnerable to poisoning attacks, in which malicious clients poison the global model by sending malicious model updates to the server. Existing defenses focus on preventing a small number of malicious clients from poisoning the global model via robust federated learning methods, and on detecting malicious clients when there are a large number of them. However, how to recover the global model from a poisoning attack after the malicious clients are detected remains an open challenge. A naive solution is to remove the detected malicious clients and train a new global model from scratch using the remaining clients. However, such a train-from-scratch recovery method incurs a large computation and communication cost, which may be intolerable for resource-constrained clients such as smartphones and IoT devices.

In this work, we propose FedRecover, a method that can recover an accurate global model from poisoning attacks with a small computation and communication cost for the clients. Our key idea is that the server estimates the clients' model updates instead of asking the clients to compute and communicate them during the recovery process. In particular, the server stores the historical information, including the global models and the clients' model updates in each round, while training the poisoned global model before the malicious clients are detected. During the recovery process, the server estimates a client's model update in each round using this stored historical information. Moreover, we further optimize FedRecover to recover a more accurate global model using warm-up, periodic correction, abnormality fixing, and final tuning strategies, in which the server asks the clients to compute and communicate their exact model updates. Theoretically, we show that the global model recovered by FedRecover is close to, or the same as, the one recovered by train-from-scratch under some assumptions. Empirically, our evaluation on four datasets, three federated learning methods, and both untargeted and targeted poisoning attacks (e.g., backdoor attacks) shows that FedRecover is both accurate and efficient.
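The abstract describes an algorithmic loop: the server stores history during the original (poisoned) training, then replays that history while estimating each benign client's update from the stored models and updates, requesting exact updates only during warm-up, at periodic correction rounds, when an estimate looks abnormal, and for final tuning. The sketch below is a minimal, self-contained Python illustration of that loop, not the paper's implementation: it assumes toy quadratic client objectives, FedAvg-style aggregation, and a dense BFGS Hessian approximation standing in for the paper's L-BFGS estimator; all names and hyperparameters (client_grad, bfgs_hessian, Tw, Tc, TAU, BUF) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, ROUNDS, LR = 5, 30, 0.3

# Toy setup: benign client i holds the quadratic loss f_i(w) = 0.5*||w - c_i||^2,
# whose gradient is (w - c_i); one malicious client sends poisoned updates.
centers = [rng.normal(size=DIM) for _ in range(4)]

def client_grad(i, w):
    if i == "malicious":                         # hypothetical poisoning attack
        return -10.0 * (w - np.ones(DIM))        # drags the model toward an attacker target
    return w - centers[i]

# Phase 1: original (poisoned) training; the server stores the history.
clients = [0, 1, 2, 3, "malicious"]
w = np.zeros(DIM)
stored_models, stored_updates = [], {i: [] for i in clients}
for t in range(ROUNDS):
    stored_models.append(w.copy())
    grads = [client_grad(i, w) for i in clients]
    for i, g in zip(clients, grads):
        stored_updates[i].append(g)
    w = w - LR * np.mean(grads, axis=0)          # FedAvg-style aggregation

def bfgs_hessian(s_list, y_list, dim):
    """Build a BFGS approximation of the Hessian from buffered
    (model-difference, update-difference) pairs. The paper uses an
    L-BFGS variant; a dense BFGS update is used here for clarity."""
    B = np.eye(dim)
    for s, y in zip(s_list, y_list):
        Bs = B @ s
        sBs = float(s @ Bs)
        ys = float(y @ s)
        if sBs > 1e-12 and abs(ys) > 1e-12:      # skip pairs that break curvature conditions
            B = B - np.outer(Bs, Bs) / sBs + np.outer(y, y) / ys
    return B

# Phase 2: recovery after removing the detected malicious client.
benign = [0, 1, 2, 3]
Tw, Tc, TAU, BUF = 3, 5, 50.0, 5                 # warm-up, correction period, abnormality threshold, buffer size
w = stored_models[0].copy()                      # restart recovery from the initial model
s_buf = {i: [] for i in benign}
y_buf = {i: [] for i in benign}
for t in range(ROUNDS):
    exact_round = t < Tw or t % Tc == 0          # warm-up and periodic correction
    grads = []
    for i in benign:
        if exact_round:
            g = client_grad(i, w)                # client computes and sends the exact update
            s_buf[i] = (s_buf[i] + [w - stored_models[t]])[-BUF:]
            y_buf[i] = (y_buf[i] + [g - stored_updates[i][t]])[-BUF:]
        else:                                    # server-side estimation from stored history
            B = bfgs_hessian(s_buf[i], y_buf[i], DIM)
            g = stored_updates[i][t] + B @ (w - stored_models[t])
            if np.linalg.norm(g) > TAU:          # abnormality fixing: fall back to an exact update
                g = client_grad(i, w)
        grads.append(g)
    w = w - LR * np.mean(grads, axis=0)

for _ in range(2):                               # final tuning with exact updates
    w = w - LR * np.mean([client_grad(i, w) for i in benign], axis=0)

print("recovered model:", np.round(w, 3))
print("clean optimum  :", np.round(np.mean(centers, axis=0), 3))
```

On this toy problem the recovered model approaches the clean optimum (the mean of the benign clients' optima) while clients compute exact updates in only a fraction of the rounds; the paper itself analyzes the estimation error and the choice of correction period.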

Original language: English (US)
Title of host publication: Proceedings - 44th IEEE Symposium on Security and Privacy, SP 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1366-1383
Number of pages: 18
ISBN (Electronic): 9781665493369
DOIs
State: Published - 2023
Event: 44th IEEE Symposium on Security and Privacy, SP 2023 - Hybrid, San Francisco, United States
Duration: May 22, 2023 - May 25, 2023

Publication series

Name: Proceedings - IEEE Symposium on Security and Privacy
Volume: 2023-May
ISSN (Print): 1081-6011

Conference

Conference: 44th IEEE Symposium on Security and Privacy, SP 2023
Country/Territory: United States
City: Hybrid, San Francisco
Period: 5/22/23 - 5/25/23

All Science Journal Classification (ASJC) codes

  • Safety, Risk, Reliability and Quality
  • Software
  • Computer Networks and Communications
