A provably stable neural network Turing Machine with finite precision and time

John Stogin, Ankur Mali, C. Lee Giles

Research output: Contribution to journal › Article › peer-review

Abstract

We introduce a neural stack architecture with a differentiable parameterized stack operator approximating stack push and pop operations. We prove the stability of this stack architecture for arbitrarily many stack operations, showing that the state of the neural stack still closely resembles the state of a discrete stack. Using the neural stack with a recurrent neural network, we devise a neural network Pushdown Automaton (nnPDA). A new theoretical bound shows that an nnPDA can recognize any PDA using only finite precision state neurons in finite time. By using two neural stacks to construct a neural tape together with a recurrent neural network, we define a neural network Turing Machine (nnTM). Just like the neural stack, we show these architectures are also stable. Furthermore, we show the nnTM is Turing complete. It requires finite precision state neurons with an arbitrary number of stack neurons to recognize any TM in finite time, thus providing a new and much stronger computational upper bound for neural networks that are Turing complete.
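The differentiable stack operator described above approximates discrete push and pop with continuous strengths, so that gradients can flow through stack operations. The paper's exact operator is not reproduced here; the following is a minimal illustrative sketch of one well-known way to make a stack differentiable (a continuous stack with per-element strengths, where `push` and `pop` are soft gate values in [0, 1] rather than discrete actions). All function and variable names are hypothetical.

```python
def soft_stack_step(values, strengths, push, pop, new_val):
    """One soft stack update: pop `pop` units of strength from the top,
    then push `new_val` with strength `push`.

    values, strengths: parallel lists (bottom of stack first).
    push, pop: soft gate values in [0, 1] (1.0 = a full discrete op).
    """
    # Soft pop: consume `pop` units of strength, starting from the top.
    new_strengths = []
    remaining = pop
    for s in reversed(strengths):
        removed = min(s, remaining)
        new_strengths.append(s - removed)
        remaining -= removed
    new_strengths = list(reversed(new_strengths))
    # Soft push: append the new value with strength `push`.
    return values + [new_val], new_strengths + [push]


def soft_read(values, strengths):
    """Read the soft top of stack: a strength-weighted average of the
    topmost elements, using at most one unit of total strength."""
    out, remaining = 0.0, 1.0
    for v, s in zip(reversed(values), reversed(strengths)):
        w = min(s, remaining)
        out += w * v
        remaining -= w
        if remaining <= 0.0:
            break
    return out


# With push/pop gates at exactly 0 or 1, the soft stack behaves like a
# discrete stack: push 1.0, push 2.0, then a full pop reveals 1.0 again.
vals, strs = [], []
vals, strs = soft_stack_step(vals, strs, push=1.0, pop=0.0, new_val=1.0)
vals, strs = soft_stack_step(vals, strs, push=1.0, pop=0.0, new_val=2.0)
top_before = soft_read(vals, strs)          # reads 2.0
vals, strs = soft_stack_step(vals, strs, push=0.0, pop=1.0, new_val=0.0)
top_after = soft_read(vals, strs)           # reads 1.0
```

When the gate values are strictly between 0 and 1, the read becomes a weighted mixture of elements near the top; the paper's stability result concerns bounding how far such a continuous state can drift from the corresponding discrete stack state over arbitrarily many operations.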

Original language: English (US)
Article number: 120034
Journal: Information Sciences
Volume: 658
DOIs
State: Published - Feb 2024

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence
