Authors
Francesco Conti, Pasquale Davide Schiavone, Luca Benini
Publication date
2018/7/18
Journal
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Volume
37
Issue
11
Pages
2940-2951
Publisher
IEEE
Description
Binary neural networks (BNNs) promise to deliver accuracy comparable to conventional deep neural networks at a fraction of the cost in terms of memory and energy. In this paper, we introduce the XNOR neural engine (XNE), a fully digital configurable hardware accelerator IP for BNNs, integrated within a microcontroller unit (MCU) equipped with an autonomous I/O subsystem and hybrid SRAM/standard-cell memory. The XNE can compute convolutional and dense layers either fully autonomously or in cooperation with the MCU core to realize more complex behaviors. We show post-synthesis results in 65- and 22-nm technology for the XNE IP and post-layout results in 22 nm for the full MCU, indicating that this system can drop the energy cost per binary operation to 21.6 fJ at 0.4 V while remaining flexible and performant enough to execute state-of-the-art BNN topologies such as …
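As background to the abstract above, the core primitive that XNOR-based BNN accelerators implement in hardware is a binary dot product: with weights and activations constrained to {-1, +1} and packed as bits, multiplication reduces to XNOR and accumulation to a popcount. The sketch below is purely illustrative, not the XNE's actual datapath:

```python
def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks.

    Bit 1 encodes +1 and bit 0 encodes -1, so XNOR marks positions
    where the two signs agree (i.e., where the product is +1).
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask       # 1 wherever signs agree
    matches = bin(xnor).count("1")         # popcount of agreements
    return 2 * matches - n                 # agreements minus disagreements

# Example: a = [+1, -1, +1, +1] -> 0b1101 (bit i = element i)
#          w = [+1, +1, -1, +1] -> 0b1011
# Signs agree at positions 0 and 3, so the dot product is 2*2 - 4 = 0.
print(binary_dot(0b1101, 0b1011, 4))  # -> 0
```

This reduction is what lets a digital accelerator like the XNE replace multiply-accumulate arrays with dense XNOR gates and popcount trees, which is the source of the femtojoule-per-operation energy figures reported in the paper.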
Total citations
(per-year citation chart, 2018–2024; exact per-year counts not recoverable from the extracted text)