Learning a class of large finite state machines with a recurrent neural network

C. Lee Giles, B. G. Horne, T. Lin

Research output: Contribution to journal › Article › peer-review

Abstract

One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show that a type of recurrent neural network (Narendra & Parthasarathy, 1990, IEEE Trans. Neural Networks, 1, 4-27) that has feedback but no hidden state neurons can learn a special type of FSM called a finite memory machine (FMM) under certain constraints. These machines have a large number of states (simulations are for 256- and 512-state FMMs) but have minimal order, relatively small depth, and little logic when the FMM is implemented as a sequential machine.
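
To make the setting concrete, here is a minimal sketch, not the paper's actual experiments: a toy finite memory machine whose output depends only on a bounded window of recent inputs and outputs, learned by a feedforward network whose only recurrence is tapped delays of the input and of its own (teacher-forced) output, in the spirit of the Narendra & Parthasarathy architecture the abstract cites. The majority-vote update rule, the memory orders, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Sketch only: a small FMM learned from tapped-delay windows; the network
# carries no hidden state between time steps, matching the "feedback but
# no hidden state neurons" description. Machine and settings are made up.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT = 2, 1  # memory orders: last 2 inputs, last 1 output

def fmm_step(x_hist, y_hist):
    """Illustrative FMM update: majority vote over the two most recent
    inputs and the previous output."""
    return int(x_hist[-1] + x_hist[-2] + y_hist[-1] >= 2)

# Drive the machine with random bits to generate a training sequence.
T = 4000
x = rng.integers(0, 2, T)
y = np.zeros(T, dtype=int)
for t in range(2, T):
    y[t] = fmm_step(x[t-1:t+1], y[t-1:t])

# Each training example is a tapped-delay window -> next output.
X = np.array([np.concatenate([x[t-1:t+1], y[t-1:t]]) for t in range(2, T)], float)
Y = y[2:].astype(float)

# One-hidden-layer MLP trained by batch gradient descent on squared error.
H, lr = 6, 0.5
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(300):
    h = sig(X @ W1 + b1)                 # feedforward hidden layer
    p = sig(h @ W2 + b2)                 # predicted next FMM output
    gp = (p - Y) * p * (1 - p) / len(Y)  # gradient through output sigmoid
    gh = np.outer(gp, W2) * h * (1 - h)  # backprop to hidden layer
    W2 -= lr * h.T @ gp; b2 -= lr * gp.sum()
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

print("training accuracy:", np.mean((p > 0.5) == Y))
```

Here the tapped-delay window plays the role of the FMM's finite memory; the 256- and 512-state machines in the paper are much larger, but the architectural idea, feedback only through delayed outputs with no hidden state neurons, is the same.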

Original language: English (US)
Pages (from-to): 1359-1365
Number of pages: 7
Journal: Neural Networks
Volume: 8
Issue number: 9
State: Published - 1995

All Science Journal Classification (ASJC) codes

  • Cognitive Neuroscience
  • Artificial Intelligence
