FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks


Abstract

Federated learning (FL) enables clients to collaboratively train a model while keeping their local training data decentralized. However, high communication costs, data heterogeneity across clients, and a lack of personalization techniques hinder the development of FL. In this paper, we propose FedLTN, a novel approach motivated by the well-known Lottery Ticket Hypothesis, to learn sparse and personalized lottery ticket networks (LTNs) for communication-efficient and personalized FL under non-identically and independently distributed (non-IID) data settings. Preserving the batch-norm statistics of local clients, post-pruning without rewinding, and aggregating LTNs using server momentum ensure that our approach significantly outperforms existing state-of-the-art solutions. Experiments on the CIFAR-10 and Tiny ImageNet datasets show the efficacy of our approach in learning personalized models while significantly reducing communication costs.
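As a rough illustration of two of the ingredients mentioned above, the following sketch shows magnitude-based pruning (the standard way a lottery ticket mask is obtained) and server-side momentum applied to averaged client updates. This is an illustrative sketch under generic assumptions, not the paper's exact FedLTN algorithm; all function names, parameters, and defaults here are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a boolean mask that zeroes out the smallest-magnitude
    fraction `sparsity` of `weights` (illustrative lottery-ticket mask)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

def server_momentum_aggregate(global_w, client_ws, velocity, lr=1.0, beta=0.9):
    """One server round: average client deltas (FedAvg-style), then apply
    server-side momentum to the averaged update (illustrative)."""
    avg_delta = np.mean([w - global_w for w in client_ws], axis=0)
    velocity = beta * velocity + avg_delta      # momentum buffer on the server
    new_global = global_w + lr * velocity       # move global model along velocity
    return new_global, velocity
```

With `beta=0` the aggregation step reduces to plain FedAvg; a positive `beta` smooths the global update across rounds, which is one common way server momentum is realized.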

Citation (APA)

Mugunthan, V., Lin, E., Gokul, V., Lau, C., Kagal, L., & Pieper, S. (2022). FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13672 LNCS, pp. 69–85). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19775-8_5
