Abstract
We examine the representational capabilities of first-order and second-order single-layer recurrent neural networks (SLRNN's) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it too can implement any finite-state recognizer, provided state-splitting is employed. When a state is split, it is replaced by two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNN's.
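The kind of second-order update the abstract refers to can be sketched as follows. This is a hypothetical illustration, not a construction taken from the paper: a second-order recurrent network with hard-limiting neurons encodes a two-state finite-state recognizer (even parity of 1s) by setting `W[i, j, k] = 1` exactly when reading symbol `k` in state `j` moves the automaton to state `i`, with one-hot states and one-hot inputs.

```python
import numpy as np

def hardlimit(x):
    # Hard-limiting (threshold) activation: 1 if net input > 0, else 0.
    return (x > 0).astype(float)

# Hypothetical example: a second-order SLRNN encoding the recognizer
# "even number of 1s". W[i, j, k] = 1 iff reading symbol k in state j
# sends the automaton to state i.
n_states, n_symbols = 2, 2
W = np.zeros((n_states, n_states, n_symbols))
W[0, 0, 0] = 1.0  # delta(q0, '0') = q0
W[1, 0, 1] = 1.0  # delta(q0, '1') = q1
W[1, 1, 0] = 1.0  # delta(q1, '0') = q1
W[0, 1, 1] = 1.0  # delta(q1, '1') = q0

def run(bits):
    s = np.array([1.0, 0.0])      # start in q0 (one-hot state vector)
    for b in bits:
        x = np.eye(n_symbols)[b]  # one-hot input symbol
        # Second-order update: net_i = sum_{j,k} W[i,j,k] * s[j] * x[k]
        s = hardlimit(np.einsum('ijk,j,k->i', W, s, x) - 0.5)
    return bool(s[0])             # accept iff in q0 (even number of 1s)

print(run([1, 0, 1]))  # two 1s  -> True
print(run([1, 1, 1]))  # three 1s -> False
```

Because the state and input vectors are one-hot, the product terms s[j]·x[k] pick out exactly one transition per step, which is why a single second-order layer suffices; a first-order layer, which can only form sums of s[j] and x[k] terms, cannot do this directly, motivating the augmented architecture and state-splitting discussed in the abstract.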
Original language | English (US) |
---|---|
Pages (from-to) | 511-513 |
Number of pages | 3 |
Journal | IEEE Transactions on Neural Networks |
Volume | 5 |
Issue number | 3 |
DOIs | |
State | Published - May 1994 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence