Extraction, Insertion and Refinement of Symbolic Rules in Dynamically Driven Recurrent Neural Networks

C. Lee Giles, Christian W. Omlin

Research output: Contribution to journal › Article › peer-review

58 Scopus citations

Abstract

Recurrent neural networks readily process, learn and generate temporal sequences. In addition, they have been shown to have impressive computational power. Recurrent neural networks can be trained with symbolic string examples encoded as temporal sequences to behave like sequential finite state recognizers. We discuss methods for extracting, inserting and refining symbolic grammatical rules for recurrent networks. This paper discusses various issues: how rules are inserted into recurrent networks, how they affect training and generalization, and how those rules can be checked and corrected. The capability of exchanging information between a symbolic representation (grammatical rules) and a connectionist representation (trained weights) has interesting implications. After partially known rules are inserted, recurrent networks can be trained to preserve inserted rules that were correct and to correct through training inserted rules that were ‘incorrect’—rules inconsistent with the training data.
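The rule-insertion idea described in the abstract can be illustrated with a small sketch. In a second-order recurrent network of the general kind the paper works with, a known finite state transition can be programmed into the weights before training by strengthening the weight linking the current-state unit and input symbol to the intended next-state unit and weakening the competing weights. The Python below is a minimal, illustrative sketch only: the DFA, the programming strength H, the network sizes, and the helper names are assumptions made for demonstration, not details taken from the paper.

import numpy as np

# Sketch of rule insertion into a second-order recurrent network,
# assuming the form S_i(t+1) = g(sum_jk W[i,j,k] * S_j(t) * I_k(t)).
# The DFA, the strength H, and the sizes below are illustrative assumptions.

H = 4.0                      # programming strength for inserted rules
n_states, n_symbols = 3, 2   # hypothetical DFA: 3 states, alphabet {0, 1}

# Known (partial) DFA transitions: (current_state, symbol) -> next_state
rules = {(0, 0): 1, (0, 1): 0, (1, 0): 2}

# Second-order weights W[i, j, k]: next-state unit i, previous-state unit j, input k
W = np.random.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))

# Insert each known transition delta(j, k) = i: push W[i, j, k] up so unit i
# turns on for that state/input pair, and push the competing units down.
for (j, k), i in rules.items():
    W[:, j, k] -= H / 2.0
    W[i, j, k] += H

def step(state_vec, symbol):
    # One state transition: sigmoid of sum_jk W[i,j,k] * S_j * I_k.
    inp = np.zeros(n_symbols)
    inp[symbol] = 1.0
    net = np.einsum('ijk,j,k->i', W, state_vec, inp)
    return 1.0 / (1.0 + np.exp(-net))

# Start in DFA state 0 (unit 0 on) and read the string "00".
s = np.zeros(n_states)
s[0] = 1.0
for sym in [0, 0]:
    s = step(s, sym)
print(np.argmax(s))   # follows the inserted rules: state 0 -> 1 -> 2

Training would then start from these programmed weights rather than from small random values, so that inserted rules consistent with the data tend to be preserved while inconsistent ones can be revised, in the spirit of the refinement the abstract describes.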

Original language: English (US)
Pages (from-to): 307-337
Number of pages: 31
Journal: Connection Science
Volume: 5
Issue number: 3-4
DOIs
State: Published - Jan 1993

All Science Journal Classification (ASJC) codes

  • Software
  • Human-Computer Interaction
  • Artificial Intelligence
