Title
Neural Networks with Few Multiplications
Authors
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio
Publication date
2015/10/11
Journal
ICLR 2016: 4th International Conference on Learning Representations
Description
For most deep learning algorithms, training is notoriously time-consuming. Since most of the computation in training neural networks is typically spent on floating-point multiplications, we investigate an approach to training that eliminates the need for most of these. Our method consists of two parts: first, we stochastically binarize weights to convert the multiplications involved in computing hidden states into sign changes. Second, while back-propagating error derivatives, in addition to binarizing the weights, we quantize the representations at each layer to convert the remaining multiplications into binary shifts. Experimental results across three popular datasets (MNIST, CIFAR10, SVHN) show that this approach not only does not hurt classification performance but can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.
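The description is enough to sketch the two ideas in toy form. The NumPy snippet below is only an illustration under assumptions of mine (the hard-sigmoid binarization probability, the exponent range, and the helper names stochastic_binarize and quantize_pow2 are hypothetical); it is not the authors' implementation. Weights clipped to [-1, 1] are sampled to {-1, +1} with probability proportional to their value, so the forward matrix product reduces to signed additions, and representations are rounded to powers of two so the remaining multiplications could be realized as binary shifts.

# Illustrative sketch only, not the authors' code; helper names and the
# exponent range are assumptions made for this example.
import numpy as np

def stochastic_binarize(W, rng):
    """Replace real-valued weights with {-1, +1} samples;
    P(+1) grows linearly with the clipped weight value (hard sigmoid)."""
    Wc = np.clip(W, -1.0, 1.0)
    p = (Wc + 1.0) / 2.0                         # probability of drawing +1
    return np.where(rng.random(W.shape) < p, 1.0, -1.0)

def quantize_pow2(x, min_exp=-8, max_exp=0):
    """Round each magnitude to a power of two (nearest in log space), so
    multiplying by the result can be done as a shift on integer hardware."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 2.0 ** min_exp)  # avoid log2(0)
    exp = np.clip(np.round(np.log2(mag)), min_exp, max_exp)
    return sign * 2.0 ** exp

rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((4, 3))            # real-valued layer weights
x = rng.standard_normal(3)                       # layer input

# Forward pass: with W_b in {-1, +1}, the product W_b @ x needs only signed
# additions -- no floating-point multiplications by the weights.
W_b = stochastic_binarize(W, rng)
h = np.tanh(W_b @ x)

# Backward-pass idea: quantizing the layer representation h to powers of two
# turns the remaining multiplications (e.g. error * h) into binary shifts.
h_q = quantize_pow2(h)
print(W_b)
print(h_q)

On actual fixed-point hardware, multiplying by 2**exp would be carried out by shifting the other operand by exp bit positions; the floating-point code above only mimics the rounding behavior.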
Scholar articles
Neural Networks with Few Multiplications. Z Lin, M Courbariaux, R Memisevic, Y Bengio - arXiv preprint arXiv:1510.03009, 2015