TY - JOUR
T1 - Training Spiking Neural Networks for Cognitive Tasks
T2 - A Versatile Framework Compatible with Various Temporal Codes
AU - Hong, Chaofei
AU - Wei, Xile
AU - Wang, Jiang
AU - Deng, Bin
AU - Yu, Haitao
AU - Che, Yanqiu
N1 - Funding Information:
Manuscript received September 1, 2017; revised July 10, 2018, January 4, 2019, and April 26, 2019; accepted May 24, 2019. Date of publication June 21, 2019; date of current version April 3, 2020. This work was supported in part by the National Natural Science Foundation of China under Grant 61871287, Grant 61471265, and Grant 61671320, and in part by the Tianjin Municipal Special Program of Talents Development for Excellent Youth Scholars. (Corresponding author: Haitao Yu.) C. Hong, X. Wei, J. Wang, B. Deng, and H. Yu are with the School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China (e-mail: hongchf@tju.edu.cn; xilewei@tju.edu.cn; jiangwang@tju.edu.cn; dengbin@tju.edu.cn; htyu@tju.edu.cn).
Publisher Copyright:
© 2019 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
UR - http://www.scopus.com/inward/record.url?scp=85083199279&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85083199279&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2919662
DO - 10.1109/TNNLS.2019.2919662
M3 - Article
C2 - 31247574
AN - SCOPUS:85083199279
SN - 2162-237X
VL - 31
SP - 1285
EP - 1296
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 4
M1 - 8743403
ER -