TY - GEN
T1 - Neuromorphic Computing Across the Stack
T2 - 2018 IEEE Workshop on Signal Processing Systems, SiPS 2018
AU - Ankit, Aayush
AU - Sengupta, Abhronil
AU - Roy, Kaushik
N1 - Funding Information:
The work was supported in part by the Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC), a DARPA-sponsored JUMP center, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, and by the DoD Vannevar Bush Fellowship.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/31
Y1 - 2018/12/31
N2 - Current machine learning workloads are constrained by their large power and energy requirements. To address these issues, recent years have witnessed increasing interest in exploring static sparsity (in synaptic memory storage) and dynamic sparsity (in neural activations using spikes) in neural networks to reduce the necessary computational resources and enable low-power, event-driven network operation. In parallel, there have been efforts to realize in-memory computing circuit primitives using emerging device technologies to alleviate the memory bandwidth limitations of CMOS-based neuromorphic computing platforms. In this paper, we discuss these two parallel research thrusts and explore how synergistic hardware-algorithm co-design in neuromorphic computing across the stack (from devices and circuits to architectural frameworks) can yield orders-of-magnitude efficiency improvements over state-of-the-art CMOS implementations.
AB - Current machine learning workloads are constrained by their large power and energy requirements. To address these issues, recent years have witnessed increasing interest in exploring static sparsity (in synaptic memory storage) and dynamic sparsity (in neural activations using spikes) in neural networks to reduce the necessary computational resources and enable low-power, event-driven network operation. In parallel, there have been efforts to realize in-memory computing circuit primitives using emerging device technologies to alleviate the memory bandwidth limitations of CMOS-based neuromorphic computing platforms. In this paper, we discuss these two parallel research thrusts and explore how synergistic hardware-algorithm co-design in neuromorphic computing across the stack (from devices and circuits to architectural frameworks) can yield orders-of-magnitude efficiency improvements over state-of-the-art CMOS implementations.
UR - http://www.scopus.com/inward/record.url?scp=85061379381&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061379381&partnerID=8YFLogxK
U2 - 10.1109/SiPS.2018.8598419
DO - 10.1109/SiPS.2018.8598419
M3 - Conference contribution
AN - SCOPUS:85061379381
T3 - IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation
SP - 1
EP - 6
BT - Proceedings of the IEEE Workshop on Signal Processing Systems, SiPS 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 21 October 2018 through 24 October 2018
ER -