Authors
Petar Jokic, Stephane Emery, Luca Benini
Publication date
2020/7/16
Journal
IEEE Embedded Systems Letters
Volume
13
Issue
3
Pages
77-80
Publisher
IEEE
Description
While the accuracy of convolutional neural networks (CNNs) has improved vastly through larger and deeper network architectures, the memory footprint for storing their parameters and activations has grown as well. This trend particularly challenges power- and resource-limited accelerator designs, which are often restricted to storing all network data in on-chip memory to avoid interfacing energy-hungry external memories. Maximizing the network size that fits on a given accelerator therefore requires maximizing its memory utilization. While the traditionally used ping-pong buffering technique maps consecutive activation layers to disjoint memory regions, we propose a mapping method that allows these regions to overlap and thus utilizes the memory more efficiently. This letter presents the mathematical model to compute the maximum activation memory overlap and thus the lower bound of on …
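A minimal Python sketch, using hypothetical activation sizes and a simplified model (not the letter's exact formulation), of how overlapping activation regions can lower the on-chip memory requirement compared to ping-pong buffering:

    # Illustrative sketch: compare activation memory for ping-pong buffering
    # (consecutive layers in disjoint regions) with a simple overlap-based
    # lower bound where only a layer's input and output must coexist.
    # All sizes are hypothetical example values in bytes.

    def pingpong_memory(act_sizes):
        # Two disjoint buffers: even-indexed activations share one buffer,
        # odd-indexed activations the other; each buffer must fit its
        # largest tensor.
        even = max(act_sizes[0::2], default=0)
        odd = max(act_sizes[1::2], default=0)
        return even + odd

    def overlap_lower_bound(act_sizes):
        # If regions may overlap, at any point only one layer's input and
        # output need to be alive, so the requirement is the largest sum
        # of two consecutive activation sizes (a simplified bound).
        return max(a + b for a, b in zip(act_sizes, act_sizes[1:]))

    sizes = [100_000, 40_000, 80_000, 90_000, 30_000]
    print("ping-pong buffers  :", pingpong_memory(sizes))      # 190000
    print("overlap lower bound:", overlap_lower_bound(sizes))  # 170000

In this toy example, allowing the regions to overlap saves 20 kB; the letter's model presumably derives a tighter bound from the layers' actual data flow, which this sketch does not capture.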
Total citations
[Citation histogram, 2021–2024]