Authors
Sashank J Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola
Publication date
2016/5/23
Conference
Advances in Neural Information Processing Systems
Issue
arXiv:1605.06900
Description
We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with a constant minibatch size converges to a stationary point. To tackle this issue, we develop fast stochastic algorithms that provably converge to a stationary point with constant minibatches. Furthermore, using a variant of these algorithms, we obtain provably faster convergence than batch proximal gradient descent. Our results are based on recent variance reduction techniques for convex optimization, combined with a novel analysis for handling nonconvex and nonsmooth functions. We also prove a global linear convergence rate for an interesting subclass of nonsmooth nonconvex functions, which subsumes several recent works.
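The algorithms studied here combine variance-reduced stochastic gradients for the smooth (possibly nonconvex) part with a proximal step for the convex nonsmooth part. As a rough illustration only, the sketch below implements a generic prox-SVRG-style loop in Python; the function names, the choice of an l1 proximal operator, and all constants are illustrative assumptions, not the paper's exact algorithm or step sizes.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1, one example of a convex nonsmooth h."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(grad_i, n, x0, step, n_epochs, m, prox=soft_threshold, reg=0.1):
    """Sketch of a prox-SVRG-style method for min (1/n) sum_i f_i(x) + h(x).

    grad_i(i, x) returns the gradient of the i-th (possibly nonconvex) f_i.
    Illustrative only; not the paper's exact algorithm or parameter choices.
    """
    x = x0.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot: the variance-reduction anchor.
        g_full = np.mean([grad_i(i, x_snap) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate.
            v = grad_i(i, x) - grad_i(i, x_snap) + g_full
            # Proximal step handles the convex nonsmooth part h.
            x = prox(x - step * v, step * reg)
    return x

# Toy usage: least-squares losses f_i(x) = 0.5 * (a_i @ x - b_i)**2 with
# h(x) = 0.1 * ||x||_1. (Convex f_i for simplicity; the paper allows
# nonconvex f_i.)
A = np.random.default_rng(1).normal(size=(100, 10))
b = A @ np.ones(10)
x_hat = prox_svrg(lambda i, x: (A[i] @ x - b[i]) * A[i], n=100,
                  x0=np.zeros(10), step=0.01, n_epochs=20, m=100)
```

With a constant "minibatch" of size one, as above, the snapshot full gradient is what keeps the stochastic estimate's variance controlled; this is the mechanism the paper's analysis extends to the nonconvex, nonsmooth setting.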
Total citations
2016: 7, 2017: 19, 2018: 28, 2019: 39, 2020: 36, 2021: 24, 2022: 34, 2023: 37, 2024: 20