Authors
Guiying Li, Chao Qian, Chunhui Jiang, Xiaofen Lu, Ke Tang
Publication date
2018/7/13
Journal
IJCAI
Volume
330
Pages
2383-2389
Description
Layer-wise magnitude-based pruning (LMP) is a very popular method for deep neural network (DNN) compression. However, tuning the layer-specific thresholds is a difficult task, since the space of threshold candidates is exponentially large and each evaluation is very expensive. Previous methods tune the thresholds mainly by hand and require expertise. In this paper, we propose an automatic tuning approach based on optimization, named OLMP. The idea is to transform the threshold tuning problem into a constrained optimization problem (i.e., minimizing the size of the pruned model subject to a constraint on the accuracy loss), and then use powerful derivative-free optimization algorithms to solve it. To compress a trained DNN, OLMP is conducted within a new iterative pruning-and-adjusting pipeline. Empirical results show that OLMP achieves the best pruning ratio on LeNet-style models (i.e., 114 times for LeNet-300-100 and 298 times for LeNet-5) compared with some state-of-the-art DNN pruning methods, and can reduce the size of an AlexNet-style network by up to 82 times without accuracy loss.
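The constrained formulation described above can be sketched in a few lines. This is an illustrative toy, not the authors' OLMP implementation: the layers, the accuracy-loss proxy, the loss budget `MAX_LOSS`, and the use of plain random search (as a stand-in for the paper's derivative-free optimizer) are all assumptions for demonstration.

```python
import numpy as np

def lmp(weights, thresholds):
    """Layer-wise magnitude pruning: zero out weights whose magnitude
    is below that layer's threshold (illustrative sketch)."""
    return [np.where(np.abs(w) >= t, w, 0.0) for w, t in zip(weights, thresholds)]

def pruned_size(weights):
    """Model size measured as the number of remaining nonzero weights."""
    return sum(int(np.count_nonzero(w)) for w in weights)

# Toy "network": two layers of random weights (hypothetical stand-in).
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)), rng.standard_normal((4, 2))]

def accuracy_loss(weights):
    """Toy proxy for accuracy loss: fraction of total weight magnitude
    removed by pruning. A real pipeline would evaluate validation accuracy."""
    orig = sum(np.abs(w).sum() for w in layers)
    kept = sum(np.abs(w).sum() for w in weights)
    return 1.0 - kept / orig

MAX_LOSS = 0.1  # assumed tolerated accuracy-loss budget

# Derivative-free search over per-layer thresholds: minimize pruned
# model size subject to the accuracy-loss constraint.
best_t, best_size = None, pruned_size(layers)
for _ in range(200):
    cand = rng.uniform(0.0, 1.0, size=len(layers))  # one threshold per layer
    pruned = lmp(layers, cand)
    if accuracy_loss(pruned) <= MAX_LOSS:
        size = pruned_size(pruned)
        if size < best_size:
            best_t, best_size = cand, size

print(best_size, pruned_size(layers))
```

The key point is that the search only ever compares feasible candidates (those within the loss budget), so the objective reduces to minimizing the nonzero-weight count, which any derivative-free optimizer can handle without gradients.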