Rule refinement with recurrent neural networks

C. Lee Giles, Christian W. Omlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

16 Scopus citations


Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFAs), and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, we show that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that the networks not only preserve correct prior knowledge, but are also able to correct, through training, inserted prior knowledge that was wrong. (By wrong, we mean that the inserted rules were not the ones in the randomly generated grammar.)
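The abstract's rule-insertion idea can be illustrated with a small sketch. The construction below is an assumption-laden simplification, not the authors' exact method: known DFA transitions are programmed into the second-order weights of a recurrent network (weight `+H` for a transition the prior rules assert, `-H` otherwise), so that with a nearly one-hot state vector the network simulates the DFA; training would then be free to preserve correct rules or overwrite wrong ones. All names (`make_weights`, `step`, `accepts`) and the parity-automaton example are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_weights(n_states, n_symbols, rules, H=6.0):
    """Program prior knowledge into second-order weights W[j][i][k].

    rules: dict mapping (state i, symbol k) -> next state j for the
    (possibly wrong) transitions we want to insert. Asserted transitions
    get weight +H; all other weights default to -H. (Hypothetical scheme.)
    """
    W = [[[-H for _ in range(n_symbols)] for _ in range(n_states)]
         for _ in range(n_states)]
    for (i, k), j in rules.items():
        W[j][i][k] = H
    return W

def step(W, S, sym):
    """One second-order update: S_j(t+1) = sigmoid(sum_i W[j][i][sym] * S_i(t))."""
    n = len(W)
    return [sigmoid(sum(W[j][i][sym] * S[i] for i in range(n)))
            for j in range(n)]

def accepts(W, accept_states, start, symbols):
    """Run the network on a symbol sequence; accept if the most active
    state unit at the end corresponds to a DFA accepting state."""
    n = len(W)
    S = [1.0 if i == start else 0.0 for i in range(n)]
    for sym in symbols:
        S = step(W, S, sym)
    return max(range(n), key=lambda j: S[j]) in accept_states

# Example: two-state parity DFA (accept strings with an even number of 1s).
rules = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
W = make_weights(n_states=2, n_symbols=2, rules=rules)
```

With `H = 6`, the state vector stays close to one-hot after each step (sigmoid(6) ≈ 0.998), so before any training the network already mimics whatever DFA fragment was inserted; gradient descent on example strings can then refine the weights where the inserted rules disagree with the target grammar.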

Original language: English (US)
Title of host publication: 1993 IEEE International Conference on Neural Networks, ICNN 1993
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 0780309995
State: Published - 1993
Event: IEEE International Conference on Neural Networks, ICNN 1993 - San Francisco, United States
Duration: Mar 28, 1993 – Apr 1, 1993

Publication series

Name: IEEE International Conference on Neural Networks - Conference Proceedings
ISSN (Print): 1098-7576


Other: IEEE International Conference on Neural Networks, ICNN 1993
Country/Territory: United States
City: San Francisco

All Science Journal Classification (ASJC) codes

  • Software


