TY - CPAPER
T1 - RESPARC: A reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks
T2 - 54th Annual Design Automation Conference, DAC 2017
AU - Ankit, Aayush
AU - Sengupta, Abhronil
AU - Panda, Priyadarshini
AU - Roy, Kaushik
N1 - Publisher Copyright:
© 2017 ACM.
PY - 2017/6/18
Y1 - 2017/6/18
AB - Neuromorphic computing using post-CMOS technologies is gaining immense popularity due to its promising ability to address the memory and power bottlenecks of von-Neumann computing systems. In this paper, we propose RESPARC, a reconfigurable and energy-efficient architecture built on Memristive Crossbar Arrays (MCAs) for deep Spiking Neural Networks (SNNs). Prior works primarily focused on device- and circuit-level implementations of SNNs on crossbars; RESPARC advances this by proposing and analyzing a complete system for SNN acceleration. RESPARC exploits the energy efficiency of MCAs for inner-product computation and realizes a hierarchical, reconfigurable design that captures the data-flow patterns of an SNN in a scalable fashion. We evaluate the proposed architecture on SNNs ranging in complexity from 2k to 230k neurons and 1.2M to 5.5M synapses. Simulation results on these networks show that, compared to a baseline digital CMOS architecture, RESPARC achieves 500x (15x) higher energy efficiency at 300x (60x) higher throughput for multi-layer perceptrons (deep convolutional networks). Furthermore, RESPARC is a technology-aware architecture that maps a given SNN topology to the optimal MCA size for the given crossbar technology.
UR - http://www.scopus.com/inward/record.url?scp=85023593996&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85023593996&partnerID=8YFLogxK
U2 - 10.1145/3061639.3062311
DO - 10.1145/3061639.3062311
M3 - Conference contribution
AN - SCOPUS:85023593996
T3 - Proceedings - Design Automation Conference
BT - Proceedings of the 54th Annual Design Automation Conference 2017, DAC 2017
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 June 2017 through 22 June 2017
ER -