TY - GEN
T1 - Training Deep Spiking Neural Networks for Energy-Efficient Neuromorphic Computing
AU - Srinivasan, Gopalakrishnan
AU - Lee, Chankyu
AU - Sengupta, Abhronil
AU - Panda, Priyadarshini
AU - Sarwar, Syed Shakib
AU - Roy, Kaushik
N1 - Funding Information:
This work was supported in part by the Center for Brain Inspired Computing (C-BRIC), one of the six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, by the Semiconductor Research Corporation, the National Science Foundation, Intel Corporation, the DoD Vannevar Bush Fellowship, and by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Spiking Neural Networks (SNNs), widely known as the third generation of neural networks, encode input information temporally using sparse spiking events, which can be harnessed to achieve higher computational efficiency for cognitive tasks. However, considering the rapid strides in accuracy enabled by state-of-the-art Analog Neural Networks (ANNs), SNN training algorithms are much less mature, leading to an accuracy gap between SNNs and ANNs. In this paper, we propose different SNN training methodologies, varying in their degree of biofidelity, and evaluate their efficacy on complex image recognition datasets. First, we present biologically plausible Spike Timing Dependent Plasticity (STDP)-based deterministic and stochastic algorithms for unsupervised representation learning in SNNs. Our analysis on the CIFAR-10 dataset indicates that STDP-based learning rules enable the convolutional layers to self-learn low-level input features using fewer training examples. However, STDP-based learning is limited in applicability to shallow SNNs (≤4 layers) and yields accuracy considerably below the state of the art. To scale SNNs deeper and further improve accuracy, we propose a conversion methodology that maps an off-the-shelf trained ANN to an SNN for energy-efficient inference. We demonstrate 69.96% accuracy for VGG16-SNN on ImageNet. However, ANN-to-SNN conversion incurs high inference latency to achieve the best accuracy. To minimize the inference latency, we propose a spike-based error backpropagation algorithm that uses a differentiable approximation of the spiking neuron. Our preliminary experiments on CIFAR-10 show that spike-based error backpropagation effectively captures temporal statistics to reduce the inference latency by up to 8× compared to converted SNNs while yielding comparable accuracy.
AB - Spiking Neural Networks (SNNs), widely known as the third generation of neural networks, encode input information temporally using sparse spiking events, which can be harnessed to achieve higher computational efficiency for cognitive tasks. However, considering the rapid strides in accuracy enabled by state-of-the-art Analog Neural Networks (ANNs), SNN training algorithms are much less mature, leading to an accuracy gap between SNNs and ANNs. In this paper, we propose different SNN training methodologies, varying in their degree of biofidelity, and evaluate their efficacy on complex image recognition datasets. First, we present biologically plausible Spike Timing Dependent Plasticity (STDP)-based deterministic and stochastic algorithms for unsupervised representation learning in SNNs. Our analysis on the CIFAR-10 dataset indicates that STDP-based learning rules enable the convolutional layers to self-learn low-level input features using fewer training examples. However, STDP-based learning is limited in applicability to shallow SNNs (≤4 layers) and yields accuracy considerably below the state of the art. To scale SNNs deeper and further improve accuracy, we propose a conversion methodology that maps an off-the-shelf trained ANN to an SNN for energy-efficient inference. We demonstrate 69.96% accuracy for VGG16-SNN on ImageNet. However, ANN-to-SNN conversion incurs high inference latency to achieve the best accuracy. To minimize the inference latency, we propose a spike-based error backpropagation algorithm that uses a differentiable approximation of the spiking neuron. Our preliminary experiments on CIFAR-10 show that spike-based error backpropagation effectively captures temporal statistics to reduce the inference latency by up to 8× compared to converted SNNs while yielding comparable accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85089225636&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089225636&partnerID=8YFLogxK
U2 - 10.1109/ICASSP40776.2020.9053914
DO - 10.1109/ICASSP40776.2020.9053914
M3 - Conference contribution
AN - SCOPUS:85089225636
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 8549
EP - 8553
BT - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Y2 - 4 May 2020 through 8 May 2020
ER -