Authors
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q Weinberger
Publication date
2016/10/8
Conference
European Conference on Computer Vision (ECCV)
Pages
646-661
Publisher
Springer, Cham
Description
Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we …
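The training procedure sketched in the description can be illustrated in a few lines. Below is a minimal PyTorch-style sketch, not the authors' implementation: the names StochasticDepthBlock and survival_prob are illustrative assumptions, and the paper additionally decays the per-layer survival probability linearly with depth, which this sketch omits for brevity.

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    # Wraps a residual branch so that, per mini-batch during training,
    # the branch is kept with probability `survival_prob` and otherwise
    # bypassed entirely via the identity shortcut.
    def __init__(self, branch: nn.Module, survival_prob: float = 0.8):
        super().__init__()
        self.branch = branch
        self.survival_prob = survival_prob  # p_l in the paper (assumed name)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            if torch.rand(1).item() < self.survival_prob:
                return x + self.branch(x)   # block survives this mini-batch
            return x                        # block dropped: identity only
        # Test time: all blocks are active; scale the residual branch by
        # its survival probability so expected activations match training.
        return x + self.survival_prob * self.branch(x)

# Example: a toy residual branch wrapped with stochastic depth.
block = StochasticDepthBlock(
    nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU()),
    survival_prob=0.8,
)
y = block(torch.randn(1, 16, 32, 32))
```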
Total citations
2016: 61 · 2017: 151 · 2018: 239 · 2019: 291 · 2020: 309 · 2021: 401 · 2022: 473 · 2023: 487 · 2024: 251
Scholar articles
G Huang, Y Sun, Z Liu, D Sedra, KQ Weinberger - Computer Vision–ECCV 2016: 14th European …, 2016