Abstract
This paper presents a theoretical approach to determining the probability of misclassification of the multilayer perceptron (MLP) neural model subject to weight errors. The types of applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated via simulation of a small illustrative example. The theoretical results, in agreement with the simulation results, show that, for the example considered, Gaussian weight errors with a standard deviation of up to 22% of the weight value can be tolerated. The theoretical method developed here adds predictability to the fault tolerance capability of neural nets and shows that this capability is heavily dependent on the problem data.
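For intuition, the sketch below reproduces the simulation side of such a study (not the paper's analytical model): every weight and bias of a small hard-threshold MLP is perturbed with zero-mean Gaussian noise whose standard deviation is a fixed fraction of that parameter's magnitude, and the misclassification probability is estimated by Monte Carlo. The XOR network, the helper names, and the `rel_sigma` parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    # Hard-threshold MLP: each layer computes step(a @ W + b), giving a
    # binary input-output mapping as in the class of tasks considered.
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = (a @ W + b > 0.0).astype(float)
    return a.astype(int)

def misclassification_probability(X, Y, weights, biases, rel_sigma, n_trials=5000):
    # Monte Carlo estimate of P(misclassification): perturb each weight and
    # bias with Gaussian noise of standard deviation rel_sigma times that
    # parameter's magnitude, then test all input patterns.
    wrong, total = 0, 0
    for _ in range(n_trials):
        noisy_w = [W + rng.normal(0.0, rel_sigma * np.abs(W)) for W in weights]
        noisy_b = [b + rng.normal(0.0, rel_sigma * np.abs(b)) for b in biases]
        for x, y in zip(X, Y):
            wrong += int(np.any(mlp_forward(x, noisy_w, noisy_b) != y))
            total += 1
    return wrong / total

# Illustrative 2-2-1 network implementing XOR (weights chosen by hand).
weights = [np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([[1.0], [-1.0]])]
biases  = [np.array([-0.5, -1.5]),             np.array([-0.5])]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]])

# 22% relative standard deviation, the tolerance level quoted in the abstract.
print(misclassification_probability(X, Y, weights, biases, rel_sigma=0.22))
```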
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 201-205 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1996 |
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence