Distributed differential privacy via shuffling


Abstract

We consider the problem of designing scalable, robust protocols for computing statistics about sensitive data. Specifically, we look at how best to design differentially private protocols in a distributed setting, where each user holds a private datum. The literature has mostly considered two models: the “central” model, in which a trusted server collects users’ data in the clear, which allows greater accuracy; and the “local” model, in which users individually randomize their data, and need not trust the server, but accuracy is limited. Attempts to achieve the accuracy of the central model without a trusted server have so far focused on variants of cryptographic multiparty computation (MPC), which limits scalability. In this paper, we initiate the analytic study of a shuffled model for distributed differentially private algorithms, which lies between the local and central models. This simple-to-implement model, a special case of the ESA framework of [5], augments the local model with an anonymous channel that randomly permutes a set of user-supplied messages. For sum queries, we show that this model provides the power of the central model while avoiding the need to trust a central server and the complexity of cryptographic secure function evaluation. More generally, we give evidence that the power of the shuffled model lies strictly between those of the central and local models: for a natural restriction of the model, we show that shuffled protocols for a widely studied selection problem require exponentially higher sample complexity than do central-model protocols.
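The shuffled model described above can be illustrated with a minimal sketch for a binary sum query: each user applies a local randomizer (here, plain randomized response), an anonymous channel uniformly permutes the messages, and an analyzer debiases the shuffled sum. This is an illustrative toy, not the paper's exact protocol; the function names and the flip probability `p` are assumptions for the sketch.

```python
import random

def local_randomizer(bit, p=0.25):
    # Randomized response: report the true bit with probability 1 - p,
    # otherwise report a uniformly random bit.
    return bit if random.random() >= p else random.randint(0, 1)

def shuffle(messages):
    # The anonymous channel: a uniformly random permutation that
    # severs the link between each message and its sender.
    out = list(messages)
    random.shuffle(out)
    return out

def analyze(messages, n, p=0.25):
    # Debias the observed sum: E[reported sum] = (1 - p) * true_sum + p * n / 2.
    s = sum(messages)
    return (s - p * n / 2) / (1 - p)

# Usage: estimate the sum of n private bits held by n users.
random.seed(0)
data = [random.randint(0, 1) for _ in range(10_000)]
reports = shuffle([local_randomizer(b) for b in data])
estimate = analyze(reports, len(data))
```

Because the analyzer sees only the multiset of messages, its view is the same as in the shuffled model; the shuffle is what lets such protocols achieve accuracy closer to the central model than the per-user randomization alone would suggest.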



Citation (APA)

Cheu, A., Smith, A., Ullman, J., Zeber, D., & Zhilyaev, M. (2019). Distributed differential privacy via shuffling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11476 LNCS, pp. 375–403). Springer Verlag. https://doi.org/10.1007/978-3-030-17653-2_13
