Authors
Tom Schaul, Sixin Zhang, Yann LeCun
Publication date
2013
Conference
International Conference on Machine Learning (ICML'13)
Description
The performance of stochastic gradient descent (SGD) depends critically on how learning rates are tuned and decreased over time. We propose a method to automatically adjust multiple learning rates so as to minimize the expected error at any one time. The method relies on local gradient variations across samples. In our approach, learning rates can increase as well as decrease, making it suitable for non-stationary problems. Using a number of convex and non-convex learning tasks, we show that the resulting algorithm matches the performance of the best settings obtained through systematic search, and effectively removes the need for learning rate tuning.
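To make the idea concrete, below is a minimal sketch of a per-parameter adaptive learning-rate update in the spirit of the method described above: running averages of the gradient, the squared gradient, and a curvature estimate drive a learning rate that shrinks when gradients are noisy and can grow again when the problem drifts. The function name, the epsilon smoothing term, the initialization choices, and the assumption that a diagonal curvature estimate is supplied externally (the paper obtains one with a bbprop-style procedure) are mine; this is an illustrative sketch, not the authors' reference implementation.

```python
import numpy as np


def vsgd_style_step(theta, grad, curv, g_bar, v_bar, h_bar, tau, eps=1e-12):
    """One per-parameter adaptive learning-rate step (illustrative sketch).

    theta : parameter vector
    grad  : stochastic gradient for the current sample
    curv  : positive estimate of the diagonal curvature (assumed given here)
    g_bar, v_bar, h_bar : running averages of gradient, squared gradient,
                          and curvature
    tau   : per-parameter memory sizes of the running averages
    """
    # Update the moving averages with per-parameter memory 1/tau.
    g_bar = (1.0 - 1.0 / tau) * g_bar + (1.0 / tau) * grad
    v_bar = (1.0 - 1.0 / tau) * v_bar + (1.0 / tau) * grad ** 2
    h_bar = (1.0 - 1.0 / tau) * h_bar + (1.0 / tau) * np.abs(curv)

    # Adapt the memory: shorten it when successive gradients agree
    # (g_bar^2 close to v_bar), lengthen it when they mostly cancel out.
    tau = (1.0 - g_bar ** 2 / (v_bar + eps)) * tau + 1.0

    # Per-parameter learning rate: large when the average gradient dominates
    # its variance, small when gradients are noisy; it can also rise again
    # later, which is what suits non-stationary problems.
    eta = g_bar ** 2 / ((v_bar + eps) * (h_bar + eps))

    theta = theta - eta * grad
    return theta, g_bar, v_bar, h_bar, tau


if __name__ == "__main__":
    # Toy usage: a noisy 2-D quadratic with known curvature (hypothetical demo).
    rng = np.random.default_rng(0)
    theta = np.array([5.0, -3.0])
    h_true = np.array([1.0, 10.0])            # curvature of the quadratic
    g_bar, v_bar = np.zeros(2), np.ones(2)
    h_bar, tau = h_true.copy(), np.full(2, 2.0)
    for _ in range(500):
        grad = h_true * theta + rng.normal(size=2)   # noisy gradient
        theta, g_bar, v_bar, h_bar, tau = vsgd_style_step(
            theta, grad, h_true, g_bar, v_bar, h_bar, tau)
    print(theta)  # should have moved toward the optimum at zero
```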
Total citations
2012: 6, 2013: 17, 2014: 35, 2015: 37, 2016: 55, 2017: 62, 2018: 56, 2019: 64, 2020: 64, 2021: 57, 2022: 42, 2023: 52, 2024: 25
Scholar articles
T Schaul, S Zhang, Y LeCun - International conference on machine learning, 2013
T Schaul, S Zhang, Y LeCun - arXiv preprint arXiv:1206.1106, 2012