TY - GEN
T1 - Performance analysis and benchmarking of all-spin spiking neural networks (Special session paper)
AU - Sengupta, Abhronil
AU - Ankit, Aayush
AU - Roy, Kaushik
N1 - Funding Information:
The work was supported in part by the Center for Spintronic Materials, Interfaces, and Novel Architectures (C-SPIN), a MARCO and DARPA sponsored StarNet center, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, and by the DoD Vannevar Bush Fellowship.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
N2 - Spiking Neural Network based brain-inspired computing paradigms are becoming increasingly popular tools for various cognitive tasks. The sparse, event-driven processing capability enabled by such networks can be appealing for the implementation of low-power neural computing platforms. However, the parallel and memory-intensive computations involved in such algorithms are in complete contrast to the sequential fetch, decode, and execute cycles of conventional von Neumann processors. Recent proposals have investigated the design of spintronic 'in-memory' crossbar-based computing architectures driving 'spin neurons' that can potentially alleviate the memory-access bottleneck of CMOS-based systems and simultaneously offer the prospect of low-power inner-product computations. In this article, we perform a rigorous system-level simulation study of such All-Spin Spiking Neural Networks on a benchmark suite of 6 recognition problems ranging in network complexity from 10k-7.4M synapses and 195-9.2k neurons. System-level simulations indicate that the proposed spintronic architecture can potentially achieve ∼1292× energy efficiency and ∼235× speedup on average over the benchmark suite in comparison to an optimized CMOS implementation at the 45nm technology node.
AB - Spiking Neural Network based brain-inspired computing paradigms are becoming increasingly popular tools for various cognitive tasks. The sparse, event-driven processing capability enabled by such networks can be appealing for the implementation of low-power neural computing platforms. However, the parallel and memory-intensive computations involved in such algorithms are in complete contrast to the sequential fetch, decode, and execute cycles of conventional von Neumann processors. Recent proposals have investigated the design of spintronic 'in-memory' crossbar-based computing architectures driving 'spin neurons' that can potentially alleviate the memory-access bottleneck of CMOS-based systems and simultaneously offer the prospect of low-power inner-product computations. In this article, we perform a rigorous system-level simulation study of such All-Spin Spiking Neural Networks on a benchmark suite of 6 recognition problems ranging in network complexity from 10k-7.4M synapses and 195-9.2k neurons. System-level simulations indicate that the proposed spintronic architecture can potentially achieve ∼1292× energy efficiency and ∼235× speedup on average over the benchmark suite in comparison to an optimized CMOS implementation at the 45nm technology node.
UR - http://www.scopus.com/inward/record.url?scp=85031034828&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85031034828&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966434
DO - 10.1109/IJCNN.2017.7966434
M3 - Conference contribution
AN - SCOPUS:85031034828
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 4557
EP - 4563
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
Y2 - 14 May 2017 through 19 May 2017
ER -