Authors
Vitaly Feldman, Tomer Koren, Kunal Talwar
Publication date
2020/6/22
Book
Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing
Pages
439-449
Description
We study differentially private (DP) algorithms for stochastic convex optimization: the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions. A recent work of Bassily et al. (2019) has established the optimal bound on the excess population loss achievable given n samples. Unfortunately, their algorithm achieving this bound is relatively inefficient: it requires O(min{n^{3/2}, n^{5/2}/d}) gradient computations, where d is the dimension of the optimization problem.
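For context, the optimal excess population loss rate established by Bassily et al. (2019) and referenced above is, up to constants and logarithmic factors, and assuming 1-Lipschitz convex losses over a domain of bounded diameter under (ε, δ)-DP:

\[ \mathbb{E}\big[L(\hat{x})\big] - \min_{x} L(x) \;=\; \Theta\!\left( \frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{\varepsilon n} \right) \]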
We describe two new techniques for deriving DP convex optimization algorithms, both achieving the optimal bound on excess loss and using O(min{n, n^2/d}) gradient computations. In particular, the algorithms match the running time of the optimal non-private algorithms. The first approach relies on the use of variable batch sizes and is analyzed using the privacy amplification by iteration technique of Feldman et …
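As a purely illustrative aid to the first technique mentioned above, the following minimal sketch shows noisy projected SGD with a variable (doubling) batch-size schedule. The function names (noisy_sgd, clip), the specific schedule, step size, and noise calibration are hypothetical placeholders chosen for readability; this is not the authors' algorithm and carries no formal (ε, δ)-DP or optimal excess-loss guarantee.

# Minimal sketch of noisy projected SGD with a variable batch schedule.
# All constants and the schedule below are illustrative assumptions.
import numpy as np

def clip(g, c):
    """Clip a gradient to L2 norm at most c (bounds per-step sensitivity)."""
    norm = np.linalg.norm(g)
    return g if norm <= c else g * (c / norm)

def noisy_sgd(grad_fn, x0, data, lr, clip_norm, sigma, batch_sizes, radius, rng):
    """One pass of noisy SGD over `data` using a variable batch-size schedule.

    grad_fn(x, z) -> per-example gradient of the loss at point x on sample z.
    batch_sizes   -> e.g. an increasing schedule, evoking variable-batch analyses.
    radius        -> projection keeps iterates inside an L2 ball (convex domain).
    """
    x = x0.copy()
    i = 0
    for b in batch_sizes:
        batch = data[i:i + b]
        i += b
        # Average of clipped per-example gradients: sensitivity <= clip_norm / b.
        g = np.mean([clip(grad_fn(x, z), clip_norm) for z in batch], axis=0)
        # Gaussian noise scaled to the batch's sensitivity.
        g = g + rng.normal(0.0, sigma * clip_norm / b, size=x.shape)
        x = x - lr * g
        # Project back onto the feasible ball.
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x *= radius / nrm
    return x

# Usage: private least-squares mean estimation on synthetic data.
rng = np.random.default_rng(0)
d, n = 5, 1024
data = rng.normal(0.3, 1.0, size=(n, d))
grad_fn = lambda x, z: x - z                 # gradient of 0.5 * ||x - z||^2
batch_sizes = [2 ** k for k in range(4, 10)]  # 16 + ... + 512 = 1008 <= n
x = noisy_sgd(grad_fn, np.zeros(d), data, lr=0.5, clip_norm=2.0,
              sigma=1.0, batch_sizes=batch_sizes, radius=10.0, rng=rng)
print("estimate:", np.round(x, 3))

Growing batches mean each later step averages more samples, so less noise is added per step as the run progresses; this is, loosely, the intuition behind pairing variable batch sizes with privacy amplification by iteration, though the paper's actual schedule and analysis differ.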
Total citations
2020: 10, 2021: 38, 2022: 47, 2023: 51, 2024: 36
Scholar articles
V Feldman, T Koren, K Talwar - Proceedings of the 52nd Annual ACM SIGACT …, 2020