Abstract
The development and evaluation of two novel nonlinear reinforcement schemes for learning automata are presented. These schemes are designed to adapt faster than the existing LR-P scheme when interacting with nonstationary environments. The first scheme is the nonlinear scheme incorporating history (NSIH), and the second is the nonlinear scheme with unstable zones (NSWUZ). The prime objective of these algorithms is to reduce the number of iterations needed for the action probability vector to reach a desired level of accuracy, rather than to converge to a specific unit vector of the Cartesian coordinate system. Simulation experiments have been conducted to assess the learning properties of NSIH and NSWUZ in nonstationary environments. The simulation results show that the proposed nonlinear algorithms respond to environmental changes faster than the LR-P scheme.
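The abstract does not spell out the NSIH or NSWUZ update rules, but the LR-P baseline it compares against is the classical linear reward-penalty automaton. As a rough illustration of the setting, the following is a minimal Python sketch of an LR-P automaton re-adapting in a nonstationary two-action environment; the function name `lrp_step`, the learning rates `a` and `b`, and the reward probabilities are illustrative assumptions, not values taken from the paper.

```python
import random

def lrp_step(p, reward_probs, a=0.05, b=0.05):
    """One interaction of an LR-P (linear reward-penalty) automaton
    with a stochastic environment.

    p            -- action probability vector (sums to 1)
    reward_probs -- probability that each action is rewarded
    a, b         -- reward/penalty learning rates (illustrative values)
    """
    r = len(p)
    # Sample an action according to the current probability vector.
    i = random.choices(range(r), weights=p)[0]
    rewarded = random.random() < reward_probs[i]
    if rewarded:
        # Reward: shift probability mass toward the chosen action.
        p = [(1 - a) * pj for pj in p]
        p[i] += a
    else:
        # Penalty: shift probability mass away from the chosen action.
        p = [(1 - b) * pj for pj in p]
        for j in range(r):
            if j != i:
                p[j] += b / (r - 1)
    return p

# Nonstationary environment: the best action switches halfway through,
# so the automaton must re-adapt -- the setting the paper targets.
p = [0.5, 0.5]
for n in range(2000):
    env = [0.8, 0.2] if n < 1000 else [0.2, 0.8]
    p = lrp_step(p, env)
print(p)
```

The proposed nonlinear schemes aim to shorten exactly this re-adaptation phase, i.e., the number of iterations the probability vector needs after an environmental switch.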
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2204-2207 |
| Number of pages | 4 |
| Journal | Proceedings of the IEEE Conference on Decision and Control |
| Volume | 4 |
| State | Published - 1990 |
| Event | Proceedings of the 29th IEEE Conference on Decision and Control, Part 6 (of 6), Honolulu, HI, USA, Dec 5-7, 1990 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Modeling and Simulation
- Control and Optimization