Inserting rules into recurrent neural networks

C. L. Giles, C. W. Omlin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

23 Scopus citations


We present a method that incorporates a priori knowledge into the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned, and these hints are encoded as rules which are then inserted into the neural network. We demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves the training time by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.
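To illustrate the idea of mapping rules into a second-order network, the sketch below programs known DFA transitions directly into the weight tensor of a small second-order recurrent network. This is not the paper's exact construction; the class name, the weight strength `H`, the bias choice `-H/2`, and the parity-automaton example are all illustrative assumptions made for this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    """Second-order recurrent network: the next value of state neuron j is
    g(sum_{i,k} W[j,i,k] * S[i] * I[k] + b[j]), where I is a one-hot
    encoding of the current input symbol."""

    def __init__(self, n_states, n_symbols, H=8.0, seed=0):
        rng = np.random.default_rng(seed)
        # small random weights stand in for the untrained part of the net
        self.W = rng.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))
        self.H = H                          # rule-insertion strength (assumed value)
        self.b = np.full(n_states, -H / 2)  # biases keep unselected state neurons off

    def insert_rule(self, src, sym, dst):
        """Program the known DFA transition delta(src, sym) = dst into the
        weights: drive the target state neuron on and the source neuron off."""
        self.W[dst, src, sym] = +self.H
        if dst != src:
            self.W[src, src, sym] = -self.H

    def step(self, S, sym):
        # the one-hot input symbol selects one slice of the weight tensor
        return sigmoid(self.W[:, :, sym] @ S + self.b)

    def run(self, string, accept_state=0):
        S = np.zeros_like(self.b)
        S[0] = 1.0                          # start in DFA state 0
        for sym in string:
            S = self.step(S, sym)
        return bool(S[accept_state] > 0.5)  # accept if that neuron is on

# Example: insert all transitions of a 2-state parity DFA over {0, 1}
# (state 0 = even number of 1s, accepting; state 1 = odd).
net = SecondOrderRNN(n_states=2, n_symbols=2)
for src, sym, dst in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]:
    net.insert_rule(src, sym, dst)
```

With every transition inserted the network behaves as the DFA before any training; in the paper's setting only a fraction of the transitions would be inserted and gradient training would complete the rest.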

Original language: English (US)
Title of host publication: Neural Networks for Signal Processing II - Proceedings of the 1992 IEEE Workshop
Editors: C.A. Kamm, S.Y. Kung, J. Aa. Sorenson, F. Fallside
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 10
ISBN (Electronic): 0780305574
State: Published - Jan 1 1992
Event: 1992 IEEE Workshop on Neural Networks for Signal Processing II - Helsingoer, Denmark
Duration: Aug 31 1992 to Sep 2 1992

Publication series

Name: Neural Networks for Signal Processing - Proceedings of the IEEE Workshop


Other: 1992 IEEE Workshop on Neural Networks for Signal Processing II

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Software
  • Computer Networks and Communications
  • Signal Processing


