Differentially Private Learning with Grouped Gradient Clipping


Abstract

While deep learning has proven successful in many critical tasks by training models on large-scale data, private information contained in the training data can be recovered from the released models, leading to privacy leakage. To address this problem, this paper presents a differentially private deep learning paradigm for training private models. In this approach, we propose and incorporate a simple operation termed grouped gradient clipping to modulate the gradient weights. We also incorporate the smooth sensitivity mechanism into the differentially private deep learning paradigm, which bounds the added Gaussian noise. In this way, the resulting model simultaneously provides strong privacy protection and avoids accuracy degradation, achieving a good trade-off between privacy and performance. The theoretical advantages of grouped gradient clipping are analyzed in detail. Extensive evaluations on popular benchmarks and comparisons with 11 state-of-the-art methods clearly demonstrate the effectiveness and generalizability of our approach.
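The abstract does not spell out the mechanics, but the general DP-SGD-style pattern it describes can be sketched as follows: partition a gradient into groups, clip each group's L2 norm to a bound, then add Gaussian noise. This is an illustrative sketch under assumed names and parameters (`group_size`, `clip_bound`, `noise_std` are hypothetical); the paper's actual grouping scheme and smooth-sensitivity noise calibration are not given in this excerpt.

```python
import numpy as np

def grouped_clip_and_noise(grad, group_size, clip_bound, noise_std, rng):
    """Illustrative sketch of grouped gradient clipping: split a flat
    gradient into fixed-size groups, clip each group's L2 norm to
    clip_bound, then perturb each group with Gaussian noise.
    (Hypothetical interface; not the paper's exact algorithm.)"""
    noisy = np.empty_like(grad)
    for start in range(0, grad.size, group_size):
        g = grad[start:start + group_size]
        norm = np.linalg.norm(g)
        # Scale the group down only if its norm exceeds the bound.
        g = g / max(1.0, norm / clip_bound)
        noisy[start:start + g.size] = g + rng.normal(0.0, noise_std, g.size)
    return noisy

rng = np.random.default_rng(0)
grad = np.array([3.0, 4.0, 0.6, 0.8])
# noise_std=0 to show the clipping effect alone: the first group
# [3, 4] has norm 5 and is scaled to [0.6, 0.8]; the second group
# already has norm 1 and is left unchanged.
out = grouped_clip_and_noise(grad, group_size=2, clip_bound=1.0,
                             noise_std=0.0, rng=rng)
```

Clipping per group rather than over the whole gradient keeps a few large coordinates from dominating the scaling of all others, which is the intuition behind modulating gradient weights group-wise.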



CITATION STYLE

APA

Liu, H., Li, C., Liu, B., Wang, P., Ge, S., & Wang, W. (2021). Differentially Private Learning with Grouped Gradient Clipping. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3469877.3490594
