Abstract
We introduce and study methods of inserting synaptic noise into dynamically-driven recurrent neural networks and show that applying a controlled amount of noise during training may improve convergence and generalization. In addition, we analyze the effects of each noise parameter (additive vs. multiplicative, cumulative vs. non-cumulative, per time step vs. per string) and predict that the best overall performance is achieved by injecting additive noise at each time step. Extensive simulations on learning the dual parity grammar from temporal strings substantiate these predictions.
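The paper itself provides no code; purely as a rough illustration of the noise taxonomy named above, the sketch below perturbs the recurrent weight matrix of a toy NumPy network during a forward pass. All names (`W`, `U`, `sigma`, `forward_with_noise`), the network size, and the noise level are assumptions for illustration, not the authors' actual architecture or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for a small recurrent net; the paper's
# architecture and the dual parity task setup are not reproduced here.
n_hidden, n_input = 4, 2
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent (synaptic) weights
U = rng.normal(scale=0.5, size=(n_hidden, n_input))    # input weights
sigma = 0.05                                            # assumed noise level

def forward_with_noise(x_seq, additive=True, cumulative=False):
    """Run one temporal string through the net, perturbing the synapses
    at every time step (per-time-step noise).  additive=True adds
    zero-mean Gaussian noise to the weights; additive=False multiplies
    them by (1 + noise).  cumulative=True lets the perturbations build
    up over the string instead of being redrawn from the nominal
    weights at each step."""
    h = np.zeros(n_hidden)
    W_eff = W.copy()
    for x in x_seq:
        noise = rng.normal(scale=sigma, size=W.shape)
        base = W_eff if cumulative else W
        W_eff = base + noise if additive else base * (1.0 + noise)
        h = np.tanh(W_eff @ h + U @ x)
    return h

# Example: one random binary string of length 6.
string = rng.integers(0, 2, size=(6, n_input)).astype(float)
print(forward_with_noise(string))                              # additive, non-cumulative
print(forward_with_noise(string, additive=False, cumulative=True))
```

Per-string noise (the other option in the taxonomy) would instead draw a single perturbation before the loop and hold it fixed for the whole string.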
| Original language | English (US) |
| --- | --- |
| Pages | 649-656 |
| Number of pages | 8 |
| State | Published - 1994 |
| Event | 7th International Conference on Neural Information Processing Systems, NIPS 1994 - Denver, United States (Duration: Jan 1 1994 → Jan 1 1994) |
Conference
| Conference | 7th International Conference on Neural Information Processing Systems, NIPS 1994 |
| --- | --- |
| Country/Territory | United States |
| City | Denver |
| Period | 1/1/94 → 1/1/94 |
All Science Journal Classification (ASJC) codes
- Information Systems
- Signal Processing
- Computer Networks and Communications