Authors
Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, Yixin Chen
Publication date
2015/6/14
Journal
arXiv preprint arXiv:1506.04449
Description
Convolutional neural networks (CNNs) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Based on the key observation that the weights of learned convolutional filters are typically smooth and low-frequency, we first convert filter weights to the frequency domain with a discrete cosine transform (DCT) and use a low-cost hash function to randomly group frequency parameters into hash buckets. All parameters assigned to the same hash bucket share a single value learned with standard back-propagation. To further reduce model size, we allocate fewer hash buckets to high-frequency components, which are generally less important. We evaluate FreshNets on eight data sets and show that it leads to drastically better compressed performance than several relevant baselines.
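As a rough illustration of the mechanism described above (not the authors' implementation), the sketch below rebuilds a single convolutional filter from a small set of shared frequency-domain weights: a seeded NumPy RNG stands in for the paper's low-cost hash function, a simple two-band split stands in for its frequency-sensitive bucket allocation, and SciPy's inverse DCT maps the coefficients back to the spatial domain. The names expand_filter, low_vals, and high_vals are illustrative and not taken from the paper.

import numpy as np
from scipy.fftpack import idct

def expand_filter(low_vals, high_vals, d, seed=0):
    """Rebuild a d x d spatial filter from two pools of shared frequency weights."""
    rng = np.random.RandomState(seed)   # stand-in for a low-cost hash function
    i, j = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    is_low = (i + j) < d                # crude low-/high-frequency split of DCT coefficients
    coeffs = np.empty((d, d))
    # Low frequencies draw from the larger pool of shared values, high frequencies
    # from the smaller one, mirroring the frequency-sensitive allocation of buckets.
    coeffs[is_low] = low_vals[rng.randint(0, len(low_vals), size=int(is_low.sum()))]
    coeffs[~is_low] = high_vals[rng.randint(0, len(high_vals), size=int((~is_low).sum()))]
    # Inverse 2-D DCT returns the coefficients to the spatial domain for use in convolution.
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# Usage: 64 + 16 shared values stand in for an 11 x 11 = 121-parameter filter.
# In FreshNets these shared values would be learned with back-propagation;
# here they are random placeholders.
low = np.random.randn(64)
high = 0.1 * np.random.randn(16)
w = expand_filter(low, high, d=11)

Because every frequency coefficient in a bucket points to one shared value, the gradients of all coefficients in that bucket accumulate into that single parameter during back-propagation, so the compressed model is trained directly rather than compressed after training.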
Total citations
Annual citation counts, 2014–2024
Scholar articles
Compressing Convolutional Neural Networks
W Chen, JT Wilson, S Tyree, KQ Weinberger, Y Chen - arXiv preprint arXiv:1506.04449, 2015