Abstract
One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing, in that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1, 4-27), which has feedback but no hidden state neurons, can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256- and 512-state FMMs) but have minimal order, relatively small depth, and little logic when the FMM is implemented as a sequential machine.
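The sketch below illustrates the kind of architecture the abstract describes: a network whose next output is a static feedforward function of a finite window of past inputs and past outputs, with no hidden state neurons, which is the structure a finite memory machine imposes. This is only a minimal illustration under assumed choices (single hidden layer, binary inputs/outputs, externally maintained delay lines); the class and parameter names such as `OutputFeedbackNet`, `input_order`, and `output_order` are illustrative and not taken from the paper, and training is omitted.

```python
# Minimal sketch (not the paper's implementation) of an output-feedback network:
# the next output is a static function of the last m inputs and last n outputs,
# with no hidden state neurons. When that static map is learned exactly, the
# network realizes a finite memory machine (FMM). All names are illustrative.

import numpy as np


class OutputFeedbackNet:
    """Feedforward map applied to tapped delay lines of inputs and outputs."""

    def __init__(self, input_order, output_order, hidden_units, rng=None):
        self.m = input_order      # number of past inputs fed back
        self.n = output_order     # number of past outputs fed back
        rng = np.random.default_rng(rng)
        d = self.m + self.n       # size of the combined delay-line vector
        self.W1 = rng.normal(scale=0.5, size=(hidden_units, d))
        self.b1 = np.zeros(hidden_units)
        self.W2 = rng.normal(scale=0.5, size=hidden_units)
        self.b2 = 0.0

    def step(self, input_window, output_window):
        """One static evaluation: next binary output from the two windows."""
        x = np.concatenate([input_window, output_window])
        h = np.tanh(self.W1 @ x + self.b1)           # single feedforward layer
        return 1.0 if (self.W2 @ h + self.b2) > 0 else 0.0

    def run(self, inputs):
        """Process a binary input string, feeding outputs back externally."""
        u_win = np.zeros(self.m)   # delay line of the last m inputs
        y_win = np.zeros(self.n)   # delay line of the last n outputs
        outputs = []
        for u in inputs:
            u_win = np.roll(u_win, 1)
            u_win[0] = u
            y = self.step(u_win, y_win)
            y_win = np.roll(y_win, 1)
            y_win[0] = y
            outputs.append(y)
        return outputs


if __name__ == "__main__":
    net = OutputFeedbackNet(input_order=4, output_order=4, hidden_units=8, rng=0)
    print(net.run([1, 0, 1, 1, 0, 0, 1, 0]))
```

Because the feedback consists only of observable past outputs (not hidden activations), the network's behavior after any sufficiently long history is determined by a bounded window, which is what restricts the learnable class to FMMs rather than arbitrary FSMs.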
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1359-1365 |
| Number of pages | 7 |
| Journal | Neural Networks |
| Volume | 8 |
| Issue number | 9 |
| DOIs | |
| State | Published - 1995 |
All Science Journal Classification (ASJC) codes
- Cognitive Neuroscience
- Artificial Intelligence