Abstract
We propose a new learning algorithm for regression modeling. The method is especially suitable for optimizing neural network structures that are amenable to a statistical description as mixture models. These include the mixture of experts, the hierarchical mixture of experts (HME), and normalized radial basis functions (NRBF). Unlike recent maximum likelihood (ML) approaches, we directly minimize the (squared) regression error. We use the probabilistic framework as a means to define an optimization method that avoids many shallow local minima on the complex cost surface. Our method is based on deterministic annealing (DA), in which the entropy of the system is gradually reduced while the expected regression cost (energy) is minimized at each entropy level. The corresponding Lagrangian is the system's "free energy," and the annealing process is controlled by varying the Lagrange multiplier, which acts as a "temperature" parameter. The new method consistently and substantially outperformed the competing methods for training NRBF and HME regression functions over a variety of benchmark regression examples.
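The annealing loop the abstract describes can be sketched compactly. The snippet below is a minimal illustration under simplifying assumptions, not the authors' full algorithm: it uses constant experts and per-sample Gibbs association probabilities at temperature T, omitting the input-dependent NRBF/HME gating of the paper, and all names and parameters (`da_mixture_regression`, `cooling`, `n_inner`, the cooling schedule itself) are hypothetical choices. At each temperature it alternates Gibbs reassignment of samples to experts with a weighted refit of each expert, then lowers T to reduce the entropy of the associations.

```python
import numpy as np

def da_mixture_regression(y, n_experts=2, T0=1.0, T_min=1e-3,
                          cooling=0.8, n_inner=20, seed=0):
    """Minimal deterministic-annealing sketch with constant experts.

    At each temperature T we alternate:
      1) Gibbs association probabilities p(j|n) ~ exp(-e_nj / T),
         where e_nj is expert j's squared error on sample n;
      2) a refit of each expert that minimizes the expected squared
         error (for a constant expert, a p-weighted mean of y).
    T is then lowered, gradually reducing the entropy of the
    associations until they harden near T_min.
    """
    rng = np.random.default_rng(seed)
    # Start all experts at the global mean; tiny perturbations break
    # the symmetry so the experts can split apart as T falls.
    w = y.mean() + 1e-3 * rng.standard_normal(n_experts)
    T = T0
    while T > T_min:
        for _ in range(n_inner):
            err = (y[:, None] - w[None, :]) ** 2          # (n, K) squared errors
            logits = -err / T
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)             # Gibbs associations
            w = (p * y[:, None]).sum(axis=0) / p.sum(axis=0)  # weighted refit
        T *= cooling                                      # anneal: reduce entropy
    return w, p

# Toy piecewise-constant target: the two experts should settle near 0 and 1.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.where(x < 0.5, 0.0, 1.0) + 0.05 * rng.standard_normal(x.size)
w, p = da_mixture_regression(y)
print("expert outputs:", np.sort(w))
```

As T approaches zero the associations harden into a winner-take-all partition of the data; the gradual cooling is what lets the optimization track a good minimum of the expected cost instead of dropping immediately into a shallow local minimum of the T = 0 (hard-assignment) cost surface.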
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2811-2820 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Signal Processing |
| Volume | 45 |
| Issue number | 11 |
| DOIs | |
| State | Published - 1997 |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Electrical and Electronic Engineering