TY - GEN
T1 - Equilibrium-Based Learning Dynamics in Spiking Architectures
AU - Bal, Malyaban
AU - Sengupta, Abhronil
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - This paper delves into methodologies that treat spiking architectures as continuously evolving dynamical systems, revealing intriguing parallels with the learning dynamics in the brain. The methods discussed in this paper address multiple challenges of training spiking architectures and highlight the necessity for bio-plausible local learning and increased model scalability in spiking architectures. We begin by exploring an energy-based learning mechanism, namely Equilibrium Propagation (EP), which emphasizes the attainment of stable states by converging to energy minima at each training phase, thus allowing the formulation of spatially and temporally local state and weight update rules. Subsequently, we delve into the synergy achieved by integrating the underlying energy-based convergent RNN architecture with a different energy-based model, namely modern Hopfield networks, thereby amplifying the capabilities of the resultant model. We further explore an efficient learning framework rooted in the convergence of the average spiking rates of neurons, which can be leveraged to advance the creation of highly scalable spiking architectures. The methodologies discussed allow spiking architectures to transition beyond simple vision-related tasks and develop solutions for complex sequence learning problems. Moreover, both frameworks can be used to develop spiking architectures that can be deployed on neuromorphic hardware to realize their energy/power efficiency.
AB - This paper delves into methodologies that treat spiking architectures as continuously evolving dynamical systems, revealing intriguing parallels with the learning dynamics in the brain. The methods discussed in this paper address multiple challenges of training spiking architectures and highlight the necessity for bio-plausible local learning and increased model scalability in spiking architectures. We begin by exploring an energy-based learning mechanism, namely Equilibrium Propagation (EP), which emphasizes the attainment of stable states by converging to energy minima at each training phase, thus allowing the formulation of spatially and temporally local state and weight update rules. Subsequently, we delve into the synergy achieved by integrating the underlying energy-based convergent RNN architecture with a different energy-based model, namely modern Hopfield networks, thereby amplifying the capabilities of the resultant model. We further explore an efficient learning framework rooted in the convergence of the average spiking rates of neurons, which can be leveraged to advance the creation of highly scalable spiking architectures. The methodologies discussed allow spiking architectures to transition beyond simple vision-related tasks and develop solutions for complex sequence learning problems. Moreover, both frameworks can be used to develop spiking architectures that can be deployed on neuromorphic hardware to realize their energy/power efficiency.
UR - http://www.scopus.com/inward/record.url?scp=85198543464&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85198543464&partnerID=8YFLogxK
U2 - 10.1109/ISCAS58744.2024.10558225
DO - 10.1109/ISCAS58744.2024.10558225
M3 - Conference contribution
AN - SCOPUS:85198543464
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
BT - ISCAS 2024 - IEEE International Symposium on Circuits and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024
Y2 - 19 May 2024 through 22 May 2024
ER -