Authors
Mark Buckler, Philip Bedoukian, Suren Jayasuriya, Adrian Sampson
Publication date
2018/6/1
Conference
2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA)
Pages
533-546
Publisher
IEEE
Description
Hardware support for deep convolutional neural networks (CNNs) is critical to advanced computer vision in mobile and embedded devices. Current designs, however, accelerate generic CNNs; they do not exploit the unique characteristics of real-time vision. We propose to use the temporal redundancy in natural video to avoid unnecessary computation on most frames. A new algorithm, activation motion compensation, detects changes in the visual input and incrementally updates a previously-computed activation. The technique takes inspiration from video compression and applies well-known motion estimation techniques to adapt to visual changes. We use an adaptive key frame rate to control the trade-off between efficiency and vision quality as the input changes. We implement the technique in hardware as an extension to state-of-the-art CNN accelerator designs. The new unit reduces the average energy per …
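The description outlines activation motion compensation (AMC) at the algorithm level: compute activations fully on key frames, and on other frames estimate pixel motion and translate the cached activation instead of re-running the expensive CNN prefix. Below is a minimal NumPy sketch of that idea, not the paper's accelerator design: it assumes a single global motion vector found by exhaustive search, a strided-pooling stand-in for the CNN prefix, and a fixed residual threshold as the adaptive key-frame test. Names such as cnn_prefix, AMCFrontEnd, and KEY_THRESHOLD are illustrative, not from the paper.

```python
import numpy as np

STRIDE = 8          # downsampling factor of the stand-in CNN prefix (assumption)
SEARCH = 8          # motion-search radius in pixels (assumption)
KEY_THRESHOLD = 12  # residual that forces a new key frame (illustrative value)

def cnn_prefix(frame):
    """Stand-in for the expensive CNN prefix: strided average pooling."""
    h, w = frame.shape
    h2, w2 = h - h % STRIDE, w - w % STRIDE
    return frame[:h2, :w2].reshape(h2 // STRIDE, STRIDE,
                                   w2 // STRIDE, STRIDE).mean(axis=(1, 3))

def estimate_motion(key_frame, frame):
    """Exhaustive global motion search using mean absolute difference,
    in the spirit of video-compression motion estimation."""
    best_err, best_vec = np.inf, (0, 0)
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            shifted = np.roll(key_frame, (dy, dx), axis=(0, 1))
            err = np.abs(frame.astype(np.float64) - shifted).mean()
            if err < best_err:
                best_err, best_vec = err, (dy, dx)
    return best_vec, best_err

class AMCFrontEnd:
    """Caches the key frame and its prefix activation; predicted frames reuse it."""

    def __init__(self):
        self.key_frame = None
        self.key_activation = None

    def activation_for(self, frame):
        if self.key_frame is None:
            return self._new_key(frame)
        (dy, dx), residual = estimate_motion(self.key_frame, frame)
        if residual > KEY_THRESHOLD:
            # Adaptive key-frame decision: motion compensation explains the
            # visual change poorly, so pay for a full prefix computation.
            return self._new_key(frame)
        # Predicted frame: translate the cached activation by the motion
        # vector scaled into activation coordinates, skipping the prefix.
        return np.roll(self.key_activation,
                       (round(dy / STRIDE), round(dx / STRIDE)), axis=(0, 1))

    def _new_key(self, frame):
        self.key_frame = frame
        self.key_activation = cnn_prefix(frame)
        return self.key_activation

# Usage: the second frame is a shifted copy of the first, so its activation
# is produced by warping the cached key-frame activation, not by cnn_prefix.
rng = np.random.default_rng(0)
base = rng.integers(0, 50, size=(128, 128)).astype(np.float64)
amc = AMCFrontEnd()
activations = [amc.activation_for(np.roll(base, (0, s), axis=(0, 1))) for s in (0, 4)]
```

In the paper, the same efficiency/accuracy trade-off is controlled by an adaptive key frame rate and hardware motion estimation; the fixed threshold above only illustrates where that decision sits in the frame-processing pipeline.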
Total citations
[Citations-per-year chart, 2018–2024]
Scholar articles
M Buckler, P Bedoukian, S Jayasuriya, A Sampson - 2018 ACM/IEEE 45th Annual International Symposium …, 2018