TY - GEN
T1 - An Information-theoretic Learning Algorithm for Neural Network Classification
AU - Miller, David J.
AU - Rao, Ajit
AU - Rose, Kenneth
AU - Gersho, Allen
N1 - Publisher Copyright:
© 1995 Neural Information Processing Systems Foundation. All rights reserved.
PY - 1995
Y1 - 1995
AB - A new learning algorithm is developed for the design of statistical classifiers minimizing the rate of misclassification. The method, which is based on ideas from information theory and analogies to statistical physics, assigns data to classes in probability. The distributions are chosen to minimize the expected classification error while simultaneously enforcing the classifier's structure and a level of "randomness" measured by Shannon's entropy. Achievement of the classifier structure is quantified by an associated cost. The constrained optimization problem is equivalent to the minimization of a Helmholtz free energy, and the resulting optimization method is a basic extension of the deterministic annealing algorithm that explicitly enforces structural constraints on assignments while reducing the entropy and expected cost with temperature. In the limit of low temperature, the error rate is minimized directly and a hard classifier with the requisite structure is obtained. This learning algorithm can be used to design a variety of classifier structures. The approach is compared with standard methods for radial basis function design and is demonstrated to substantially outperform other design methods on several benchmark examples, while often retaining design complexity comparable to, or only moderately greater than, that of strict descent-based methods.
UR - https://www.scopus.com/pages/publications/105021333561
M3 - Conference contribution
AN - SCOPUS:105021333561
T3 - Advances in Neural Information Processing Systems
SP - 591
EP - 597
BT - Advances in Neural Information Processing Systems 8, NIPS 1995
A2 - Touretzky, D.
A2 - Mozer, M.C.
A2 - Hasselmo, M.
PB - Neural Information Processing Systems Foundation
T2 - 8th Advances in Neural Information Processing Systems, NIPS 1995
Y2 - 27 November 1995 through 30 November 1995
ER -