Preserving Privacy and Security in Federated Learning

Abstract

Federated learning is known to be vulnerable to both security and privacy issues. Existing research has focused either on preventing poisoning attacks from users or on concealing local model updates from the server, but not both. However, integrating these two lines of research remains a crucial challenge, since they often conflict with one another with respect to the threat model. In this work, we develop a principled framework that offers both privacy guarantees for users and detection of poisoning attacks launched by them. Under a new threat model that includes both an honest-but-curious server and malicious users, we first propose a secure aggregation protocol that uses homomorphic encryption so the server can combine local model updates without observing them. A zero-knowledge proof protocol is then leveraged to shift the task of detecting attacks in the local models from the server to the users. The key observation is that the server no longer needs access to the local models for attack detection. Therefore, our framework enables the central server to identify poisoned model updates without violating the privacy guarantees of secure aggregation.
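The privacy side of the framework, secure aggregation under homomorphic encryption, can be illustrated with a minimal sketch. The abstract does not fix a particular scheme, so the toy below assumes the textbook Paillier cryptosystem, where multiplying ciphertexts adds the underlying plaintexts; the hard-coded primes, the fixed-point quantization scale, and all names are illustrative assumptions, not the authors' implementation, and the zero-knowledge detection side is omitted.

```python
import secrets
from math import gcd

# Toy key material: hard-coded Mersenne primes keep the demo short.
# A real deployment needs freshly generated primes of >= 1024 bits each.
P, Q = 2147483647, 2305843009213693951        # 2^31 - 1 and 2^61 - 1
N = P * Q                                     # Paillier public modulus
N2 = N * N
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)  # lambda = lcm(p-1, q-1), secret
SCALE = 10**6                                 # fixed-point quantization factor

def encrypt(m: int) -> int:
    """Paillier encryption with g = n + 1: c = (1+n)^m * r^n mod n^2."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if gcd(r, N) == 1:
            break
    return (pow(1 + N, m % N, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """m = L(c^lambda mod n^2) * lambda^{-1} mod n, with L(x) = (x-1)//n."""
    m = (((pow(c, LAM, N2) - 1) // N) * pow(LAM, -1, N)) % N
    return m - N if m > N // 2 else m         # map back to a signed value

def aggregate(ciphertexts):
    """Server-side step: multiplying ciphertexts adds the plaintexts,
    so the server combines updates without ever decrypting one."""
    agg = 1
    for c in ciphertexts:
        agg = (agg * c) % N2
    return agg

# Each user quantizes one coordinate of its local update and encrypts it.
updates = [0.25, -0.40, 0.05]                 # toy per-user gradient values
cts = [encrypt(round(u * SCALE)) for u in updates]
print(decrypt(aggregate(cts)) / SCALE)        # -0.1 == sum(updates)
```

One design point the sketch collapses for brevity: whoever holds the decryption key learns the aggregate, so in practice the key is kept separate from individual updates (for example via threshold or distributed decryption), matching the honest-but-curious server in the threat model.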

Citation (APA)

Nguyen, T., & Thai, M. T. (2024). Preserving Privacy and Security in Federated Learning. IEEE/ACM Transactions on Networking, 32(1), 833–843. https://doi.org/10.1109/TNET.2023.3302016
