Authors
Shuang Liang, Shouyi Yin, Leibo Liu, Wayne Luk, Shaojun Wei
Publication date
2018/1/31
Journal
Neurocomputing
Volume
275
Pages
1072-1086
Publisher
Elsevier
Description
Deep neural networks (DNNs) have attracted significant attention for their excellent accuracy, especially in areas such as computer vision and artificial intelligence. To enhance their performance, hardware acceleration technologies are being studied. FPGA technology is a promising choice for hardware acceleration, given its low power consumption and high flexibility, which make it particularly suitable for embedded systems. However, complex DNN models may need more computing and memory resources than many current FPGAs provide. This paper presents FP-BNN, a binarized neural network (BNN) for FPGAs, which drastically cuts hardware consumption while maintaining acceptable accuracy. We introduce a Resource-Aware Model Analysis (RAMA) method, remove the bottleneck involving multipliers by bit-level XNOR and shift operations, and the bottleneck of …
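The multiplier-elimination idea mentioned in the abstract can be illustrated with the standard XNOR-popcount trick used by binarized networks: when weights and activations are constrained to ±1 and packed as bit vectors, a dot product reduces to an XNOR followed by a bit count. The sketch below is a generic illustration of that technique, not the paper's FP-BNN implementation; the function names are hypothetical.

```python
def binarize(v):
    """Pack a ±1 vector into an integer bit-mask: +1 -> 1, -1 -> 0."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def xnor_dot(a_bits, b_bits, n):
    """Dot product of two packed ±1 vectors of length n.

    XNOR marks positions where the bits agree; each match contributes +1
    and each mismatch -1, so dot = matches - mismatches = 2*matches - n.
    """
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
# plain dot product: 1*1 + (-1)*1 + 1*(-1) + 1*1 = 0
print(xnor_dot(binarize(a), binarize(b), len(a)))  # 0
```

On an FPGA, the XNOR and popcount map to cheap bit-level logic, which is why this substitution removes the DSP-multiplier bottleneck the abstract refers to.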
Total citations
2018: 18 · 2019: 49 · 2020: 68 · 2021: 73 · 2022: 65 · 2023: 36 · 2024: 26