Authors
Cong Leng, Zesheng Dou, Hao Li, Shenghuo Zhu, Rong Jin
Publication date
2018/4/29
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
32
Issue
1
Description
Although deep learning models are highly effective for various learning tasks, their high computational cost prohibits deployment in scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models whose network weights are represented with very small numbers of bits, referred to as extremely low bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing the idea of the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence than conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when it comes to extremely low bit neural networks.
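The sketch below illustrates the ADMM decoupling idea described in the abstract on a toy problem: a continuous copy of the weights W is trained with an extragradient step, a discrete copy G is obtained by iteratively projecting onto a ternary set {-a, 0, +a}, and a scaled dual variable u ties the two together. The toy regression loss, the ternary code choice, the penalty rho, and the step size are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of ADMM-style extremely-low-bit quantization (NumPy).
# Assumptions (not from the paper's code): toy linear-regression loss,
# ternary weight set {-a, 0, +a}, rho = 1.0, learning rate 0.1.
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous problem: fit weights w to a linear regression loss f(w).
X = rng.normal(size=(200, 16))
true_w = rng.normal(size=16)
y = X @ true_w

def loss_grad(w):
    """Gradient of f(w) = 0.5 * ||Xw - y||^2 / n."""
    return X.T @ (X @ w - y) / len(y)

def project_ternary(v):
    """Iterative quantization onto {-a, 0, +a}: alternately fix the
    codes q and re-solve the scale a in closed form."""
    a = np.abs(v).mean() + 1e-12
    q = np.zeros_like(v)
    for _ in range(10):
        q = np.where(np.abs(v) > a / 2, np.sign(v), 0.0)  # fix a, solve q
        if np.any(q != 0):
            a = np.abs(v[q != 0]).mean()                  # fix q, solve a
    return a * q

# ADMM splitting: minimize f(W) s.t. W = G, G in the ternary set,
# using the scaled augmented term (rho/2) * ||W - G + u||^2.
w = rng.normal(size=16)
g = project_ternary(w)
u = np.zeros_like(w)
rho, lr = 1.0, 0.1

for it in range(300):
    # W-subproblem: one extragradient (predictor-corrector) step on
    # f(W) + (rho/2) * ||W - G + u||^2.
    def aug_grad(v):
        return loss_grad(v) + rho * (v - g + u)
    w_half = w - lr * aug_grad(w)    # predictor step
    w = w - lr * aug_grad(w_half)    # corrector step

    g = project_ternary(w + u)       # G-subproblem: Euclidean projection
    u = u + w - g                    # dual update enforcing W = G

print("final ternary weights:", g)
```

The projection step is where the "iterative quantization" of the abstract enters: because the constraint set couples a real scale with discrete codes, a single hard threshold is not optimal, and the alternating a/q updates converge quickly in practice.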
Total citations
[Citations-per-year chart, 2017-2024]
Scholar articles
C Leng, Z Dou, H Li, S Zhu, R Jin - Proceedings of the AAAI Conference on Artificial Intelligence, 2018