Abstract
We propose a new learning method for building a general statistical inference engine, operating on discrete feature spaces. Such a model allows inference on any feature given values for the other features (or for a feature subset). Bayesian networks (BNs) are versatile tools that possess this inference capability. However, while the BN's explicit representation of conditional independencies is informative, this structure is not so easily learned. Typically, learning methods for BNs use (sub-optimal) greedy search techniques. There is also a difficult issue of overfitting in these models. Alternatively, in 1983 Cheeseman proposed finding the maximum entropy (ME) joint pmf consistent with arbitrary lower order probability constraints. This approach has some potential advantages over BNs. However, the huge complexity required for learning the joint pmf has severely limited the use of this approach until now. Here we propose an approximate ME method which also allows incorporation of arbitrary lower order constraints, while retaining quite tractable learning complexity. The new method approximates the joint feature pmf (during learning) on a subgrid of the full feature space grid. Experimental results on the UC-Irvine repository reveal significant performance gains over two BN approaches: Chow and Liu's dependence trees and Herskovits and Cooper's Kutato. Several extensions of our approach are indicated.
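The ME construction referenced above can be illustrated with a small sketch. This is not the paper's algorithm, only a minimal example of the underlying idea: iterative proportional fitting (IPF) converges to the maximum entropy joint pmf over a discrete feature space consistent with given lower order (here, pairwise marginal) constraints, after which any conditional query can be answered from the joint. All variable names and sizes are illustrative.

```python
import numpy as np

# Illustrative only: ME joint pmf over three binary features, constrained
# to match given pairwise marginals, found by iterative proportional fitting.
rng = np.random.default_rng(0)

# A "true" joint pmf over (x0, x1, x2), used only to derive target marginals.
true_joint = rng.random((2, 2, 2))
true_joint /= true_joint.sum()

# Lower order constraints: the three pairwise marginals of the true joint.
targets = {
    (0, 1): true_joint.sum(axis=2),
    (0, 2): true_joint.sum(axis=1),
    (1, 2): true_joint.sum(axis=0),
}

# Start from the uniform (maximum entropy) joint, then repeatedly rescale
# toward each pairwise marginal; the fixed point is the ME pmf that
# satisfies all of the constraints.
p = np.full((2, 2, 2), 1.0 / 8.0)
for _ in range(200):
    for (i, j), target in targets.items():
        axis = ({0, 1, 2} - {i, j}).pop()   # the axis summed out
        current = p.sum(axis=axis)
        ratio = target / np.maximum(current, 1e-12)
        p *= np.expand_dims(ratio, axis=axis)

# Fitted pairwise marginals now match the targets.
for (i, j), target in targets.items():
    axis = ({0, 1, 2} - {i, j}).pop()
    assert np.allclose(p.sum(axis=axis), target, atol=1e-6)

# Inference on any feature given the others, directly from the joint:
# e.g. p(x0 | x1 = 1, x2 = 0).
cond_x0 = p[:, 1, 0] / p[:, 1, 0].sum()
```

For the full joint over many features this table is exponentially large, which is exactly the complexity barrier the paper's subgrid approximation is designed to avoid.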
| Original language | English (US) |
|---|---|
| Pages | 112-121 |
| Number of pages | 10 |
| State | Published - 1999 |
| Event | Proceedings of the 1999 9th IEEE Workshop on Neural Networks for Signal Processing (NNSP'99) - Madison, WI, USA |
| Duration | Aug 23 1999 → Aug 25 1999 |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Software
- Electrical and Electronic Engineering