Infinitely Divisible Noise in the Low Privacy Regime
Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed
Standard
Infinitely Divisible Noise in the Low Privacy Regime. / Pagh, Rasmus; Stausholm, Nina Mesing.
Proceedings of The 33rd International Conference on Algorithmic Learning Theory. PMLR, 2022. pp. 881-909 (Proceedings of Machine Learning Research, Vol. 167).
RIS
TY - GEN
T1 - Infinitely Divisible Noise in the Low Privacy Regime
AU - Pagh, Rasmus
AU - Stausholm, Nina Mesing
PY - 2022
Y1 - 2022
N2 - Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε>1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.
AB - Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for ε-differential privacy, where ε>1 is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves ε-differential privacy and has expected error that decreases exponentially with ε.
M3 - Article in proceedings
T3 - Proceedings of Machine Learning Research
SP - 881
EP - 909
BT - Proceedings of The 33rd International Conference on Algorithmic Learning Theory
PB - PMLR
T2 - 33rd International Conference on Algorithmic Learning Theory (ALT 2022)
Y2 - 29 March 2022 through 1 April 2022
ER -
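The abstract invokes a classical fact: a Laplace distribution is infinitely divisible, so it can be written as a sum of n i.i.d. shares, each the difference of two Gamma(1/n, b)-distributed variables, one share contributed per user. The sketch below is a Monte Carlo illustration of that background fact only (it does not implement the paper's new noise distribution); it assumes NumPy, and the names `n_users`, `b`, and `samples` are illustrative.

```python
import numpy as np

# Known decomposition: Laplace(0, b) = sum over n users of (G_i - G'_i),
# where G_i, G'_i are independent Gamma(shape=1/n, scale=b) variables.
rng = np.random.default_rng(0)
n_users, b, samples = 10, 1.0, 200_000

# Each user contributes one noise share G - G'.
shares = (rng.gamma(1.0 / n_users, b, (n_users, samples))
          - rng.gamma(1.0 / n_users, b, (n_users, samples)))
aggregate = shares.sum(axis=0)  # total noise seen after secure aggregation

# Sanity check against Laplace(0, b): mean 0, variance 2 * b**2.
print(aggregate.mean(), aggregate.var())
```

The empirical mean and variance of the aggregated shares should be close to 0 and 2b², matching a single Laplace(0, b) draw, which is why each user can add only a small Gamma-difference share yet the aggregate is Laplace-distributed.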