A Dynamically Stabilized Recurrent Neural Network

Samer Saab, Yiwei Fu, Asok Ray, Michael Hauser

Research output: Contribution to journal › Article › peer-review

26 Scopus citations


This work proposes a novel recurrent neural network architecture, called the Dynamically Stabilized Recurrent Neural Network (DSRNN). The DSRNN includes learnable skip-connections across a specified number of time-steps, which allows for a state-space representation of the network's hidden-state trajectory, and a regularization term is introduced into the loss function in the setting of Lyapunov stability theory. The regularizer enables the placement of the eigenvalues of the (linearized) transfer function matrix at desired locations in the complex plane, thereby acting as an internal controller for the hidden-state trajectories. In this way, the DSRNN adjusts the weights of the temporal skip-connections to achieve recurrent hidden-state stability, which mitigates the problems of vanishing and exploding gradients. The efficacy of the DSRNN is demonstrated on a forecasting task using data recorded from a double-pendulum experiment. The results show that the DSRNN outperforms both the Long Short-Term Memory (LSTM) and vanilla recurrent neural networks, reducing the LSTM's relative mean-squared error by up to ∼99.64%. The DSRNN also shows results comparable to the LSTM on a classification task involving two Lorenz oscillator systems.
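The core mechanism described in the abstract, a hidden-state update with a learnable skip-connection across k time-steps plus a Lyapunov-style penalty on the eigenvalues of the linearized transition matrix, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`dsrnn_step`, `stability_regularizer`), the tanh nonlinearity, and the target spectral radius `rho` are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsrnn_step(h_hist, x, W, A, U, k=2):
    """One hidden-state update with a skip-connection k steps back:
    h_t = tanh(W h_{t-1} + A h_{t-k} + U x_t).
    h_hist is the list of past hidden states, most recent last."""
    h_prev = h_hist[-1]
    h_skip = h_hist[-k] if len(h_hist) >= k else np.zeros_like(h_prev)
    return np.tanh(W @ h_prev + A @ h_skip + U @ x)

def stability_regularizer(W, A, rho=0.9):
    """Lyapunov-style penalty (illustrative): punish eigenvalues of the
    linearized transition matrix W + A whose magnitude exceeds a target
    spectral radius rho, encouraging stable hidden-state trajectories."""
    eig = np.linalg.eigvals(W + A)
    return float(np.sum(np.maximum(np.abs(eig) - rho, 0.0) ** 2))

# Roll out a few steps on random data (hidden size 4, input size 3).
n, m = 4, 3
W = 0.1 * rng.standard_normal((n, n))
A = 0.1 * rng.standard_normal((n, n))  # learnable skip-connection weights
U = 0.1 * rng.standard_normal((n, m))

h_hist = [np.zeros(n)]
for t in range(5):
    h_hist.append(dsrnn_step(h_hist, rng.standard_normal(m), W, A, U))

# In training, this term would be added to the task loss.
penalty = stability_regularizer(W, A)
```

The penalty is zero whenever all eigenvalues of W + A already lie inside the disk of radius rho, so gradient descent on the combined loss only adjusts the skip-connection weights when the linearized dynamics would otherwise be unstable.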

Original language: English (US)
Pages (from-to): 1195-1209
Number of pages: 15
Journal: Neural Processing Letters
Issue number: 2
State: Published - Apr 2022

All Science Journal Classification (ASJC) codes

  • Software
  • General Neuroscience
  • Computer Networks and Communications
  • Artificial Intelligence
