TY - JOUR
T1 - Time-delay neural networks
T2 - Representation and induction of finite-state machines
AU - Clouse, Daniel S.
AU - Giles, C. Lee
AU - Horne, Bill G.
AU - Cottrell, Garrison W.
N1 - Funding Information:
Manuscript received May 10, 1995; revised June 19, 1996 and April 2, 1997. D. S. Clouse was supported in part by a USPHS Predoctoral Traineeship. D. S. Clouse and G. W. Cottrell are with the University of California, San Diego, La Jolla, CA 92093-0114 USA. C. L. Giles is with NEC Research Institute, Princeton, NJ 08540 USA. He is also with the Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 USA. B. G. Horne was with NEC Research Institute, Princeton, NJ 08540 USA. He is now with ADM Consulting, Califon, NJ 07830 USA. Publisher Item Identifier S 1045-9227(97)06048-7.
PY - 1997
Y1 - 1997
AB - In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNN's) with input delay neural networks (IDNN's), the subclass of TDNN's with delays limited to the inputs. Each class of networks is capable of representing the same set of languages, those embodied by the definite memory machines (DMM's), a subclass of finite-state machines. We demonstrate the close affinity between TDNN's and DMM languages by learning a very large DMM (2048 states) using only a few training examples. Even though both architectures are capable of representing the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNN's which include delays in hidden layers should perform well, compared to IDNN's, on problems in which the output can be expressed as a function on narrow input windows which repeat in time. On the other hand, these general TDNN's should perform poorly when the input windows are wide, or there is little repetition. We confirm these hypotheses via a set of simulations and statistical analysis.
UR - http://www.scopus.com/inward/record.url?scp=0031237543&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0031237543&partnerID=8YFLogxK
U2 - 10.1109/72.623208
DO - 10.1109/72.623208
M3 - Article
C2 - 18255709
AN - SCOPUS:0031237543
SN - 1045-9227
VL - 8
SP - 1065
EP - 1070
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 5
ER -