First-Order Versus Second-Order Single-Layer Recurrent Neural Networks

Mark W. Goudreau, C. Lee Giles, Srimat T. Chakradhar, D. Chen

Research output: Contribution to journal › Article › peer-review

69 Scopus citations


We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.
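To make the distinction concrete, here is a minimal sketch (not the paper's notation; all names and shapes are illustrative assumptions) of the two state-update rules with hard-limiting neurons. A first-order SLRNN computes the next state from a weighted sum of state and input units, while a second-order SLRNN weights products of state and input units, which lets a weight tensor directly encode a finite-state recognizer's transition table when states and inputs are one-hot.

```python
# Hypothetical illustration of first-order vs. second-order SLRNN state
# updates with hard-limiting neurons. Names and shapes are assumptions,
# not the paper's notation.
import numpy as np

def hard_limit(x):
    # Hard-limiting activation: 1 if the net input is positive, else 0.
    return (x > 0).astype(int)

def first_order_step(W_s, W_x, state, inp):
    # First-order update: next state from a weighted SUM of units,
    # s' = f(W_s s + W_x x).
    return hard_limit(W_s @ state + W_x @ inp)

def second_order_step(W, state, inp):
    # Second-order update: weights multiply PRODUCTS of state and input
    # units, s'_i = f(sum_{j,k} W[i,j,k] * s_j * x_k).
    return hard_limit(np.einsum('ijk,j,k->i', W, state, inp))

# Example: encode a 2-state parity recognizer (odd number of 1s) in the
# second-order weight tensor. W[i, j, k] = 1 iff reading symbol k in
# state j moves the automaton to state i.
n_states, n_symbols = 2, 2
W = np.zeros((n_states, n_states, n_symbols))
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
for (j, k), i in delta.items():
    W[i, j, k] = 1.0

state = np.array([1, 0])          # one-hot start state 0
for sym in [1, 1, 0, 1]:          # three 1s -> odd parity
    state = second_order_step(W, state, np.eye(n_symbols, dtype=int)[sym])
```

After reading three 1s the one-hot state vector sits on state 1, the "odd parity" state; the point of the sketch is that the second-order weights implement the transition table exactly, whereas a bare first-order SLRNN cannot do this for every finite-state recognizer.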

Original language: English (US)
Pages (from-to): 511-513
Number of pages: 3
Journal: IEEE Transactions on Neural Networks
Issue number: 3
State: Published - May 1994

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
