On recurrent neural networks and representing finite-state recognizers

M. W. Goudreau, C. L. Giles

Research output: Contribution to journal › Article › peer-review

Abstract

A discussion of the representational abilities of Single Layer Recurrent Neural Networks (SLRNNs) is presented. The fact that SLRNNs cannot implement all finite-state recognizers is addressed. However, there are methods that can be used to expand the representational abilities of SLRNNs, and some of these are explained; such systems are called augmented SLRNNs. Possibilities for augmented SLRNNs include: adding a layer of feedforward neurons to the SLRNN, allowing the SLRNN an extra time step to calculate the solution, and increasing the order of the SLRNN. Significantly, for some problems, some augmented SLRNNs must actually implement a non-minimal finite-state recognizer that is equivalent to the desired finite-state recognizer. Simulations demonstrate the use of both an SLRNN and an augmented SLRNN for the problem of learning an odd parity finite-state recognizer using a gradient descent method.
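
To make the odd-parity setting concrete, here is a minimal sketch in Python of the third augmentation above (increasing the order of the SLRNN): a small second-order recurrent network trained by gradient descent to track the running parity of a bit string. This is an illustrative reconstruction, not the paper's simulation; the network sizes, the use of backpropagation through time, and all hyperparameters are assumptions.

```python
import numpy as np

# Sketch of an augmented SLRNN via increased order: the next state is a
# second-order (bilinear) function of the current state and the input.
# Architecture sizes, BPTT training, and hyperparameters are assumptions.

rng = np.random.default_rng(0)

N_STATE, N_INPUT = 3, 2          # state neurons; inputs are one-hot over {0, 1}
W = rng.normal(0.0, 0.5, size=(N_STATE, N_STATE, N_INPUT))
LR = 0.5

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def run_and_grad(bits):
    """Forward pass over one bit string, then BPTT; returns (loss, dW)."""
    T = len(bits)
    x = np.zeros((T, N_INPUT)); x[np.arange(T), bits] = 1.0  # one-hot inputs
    targets = np.cumsum(bits) % 2                            # odd parity so far
    s = np.zeros((T + 1, N_STATE)); s[0, 0] = 1.0            # fixed start state
    for t in range(T):                                       # second-order update
        s[t + 1] = sigmoid(np.einsum('ijk,j,k->i', W, s[t], x[t]))
    out = s[1:, 0]                                           # neuron 0 is the output
    loss = np.sum((out - targets) ** 2)

    dW = np.zeros_like(W)
    ds_next = np.zeros(N_STATE)
    for t in reversed(range(T)):                             # backprop through time
        ds = ds_next.copy()
        ds[0] += 2.0 * (out[t] - targets[t])                 # loss term at this step
        da = ds * s[t + 1] * (1.0 - s[t + 1])                # sigmoid derivative
        dW += np.einsum('i,j,k->ijk', da, s[t], x[t])
        ds_next = np.einsum('ijk,i,k->j', W, da, x[t])       # pass gradient to s[t]
    return loss, dW

for epoch in range(2000):                                    # plain gradient descent
    bits = rng.integers(0, 2, size=10)
    loss, dW = run_and_grad(bits)
    W -= LR * dW

# The trained network should track parity on a fresh string.
bits = rng.integers(0, 2, size=20)
loss, _ = run_and_grad(bits)
print(f"test loss on a 20-bit string: {loss:.4f}")
```

Replacing the second-order update with a plain first-order SLRNN step would, on this task, leave the same training loop unable to drive the loss down, which is the kind of contrast the paper's simulations examine.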

Original language: English (US)
Pages (from-to): 51-54
Number of pages: 4
Journal: IEE Conference Publication
Issue number: 372
State: Published - 1993

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
