FedCML: Federated Clustering Mutual Learning with non-IID Data

Abstract

Federated learning (FL) enables multiple clients to collaboratively train deep learning models under the coordination of a centralized aggregator. Because the local private datasets of edge clients cannot be communicated or collected, training is exposed to the threat of heterogeneous (non-IID) data. Although numerous studies have addressed this issue, deep learning models still fail to attain good performance in certain tasks or scenarios. In this paper, we revisit the challenge and propose FedCML, an efficient federated clustering mutual learning framework with a semi-supervised strategy that avoids the need to restrict specific empirical parameters. We conduct extensive experimental evaluations on two benchmark datasets and thoroughly compare FedCML to state-of-the-art approaches. The results demonstrate its promising performance: accuracy on MNIST and CIFAR10 improves by up to 0.53% and 1.58%, respectively, in the non-IID setting, while ensuring optimal bandwidth efficiency (4.69× and 4.73× less communication than FedAvg/FeSem on the two datasets).
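The abstract does not spell out FedCML's algorithm, but the general idea behind clustered federated learning can be illustrated with a minimal sketch: group clients whose models are similar, then aggregate FedAvg-style within each cluster. The cluster count, the distance metric, and the omission of the cross-cluster mutual-learning step below are all assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of clustered federated aggregation (not the FedCML
# algorithm itself). Clients are clustered by model-weight similarity, then
# each cluster is aggregated with a FedAvg-style weighted average.
import numpy as np

def fedavg(weights, sizes):
    """Weighted average of client weight vectors (FedAvg-style aggregation)."""
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(weights), axis=0, weights=sizes / sizes.sum())

def cluster_clients(weights, k, iters=10, seed=0):
    """Naive k-means over flattened client weights (an assumed clustering rule)."""
    rng = np.random.default_rng(seed)
    W = np.stack(weights)
    centers = W[rng.choice(len(W), size=k, replace=False)]
    for _ in range(iters):
        # Assign each client to its nearest cluster center.
        assign = np.argmin(((W[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = W[assign == c].mean(axis=0)
    return assign

# Toy round: 6 clients with 4-dimensional "models", grouped into 2 clusters.
client_weights = [np.random.default_rng(i).normal(size=4) for i in range(6)]
client_sizes = [100, 80, 120, 90, 110, 70]
assign = cluster_clients(client_weights, k=2)
cluster_models = {
    c: fedavg([w for w, a in zip(client_weights, assign) if a == c],
              [s for s, a in zip(client_sizes, assign) if a == c])
    for c in set(assign.tolist())
}
print(cluster_models)
```

In a full round, each cluster model would be sent back to that cluster's clients for further local training, and a mutual-learning step would exchange knowledge across clusters; those steps are left out of this sketch.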

Citation (APA)

Chen, Z., Wang, F., Yu, S., Liu, X., & Zheng, Z. (2023). FedCML: Federated Clustering Mutual Learning with non-IID Data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14100 LNCS, pp. 623–636). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-39698-4_42
