TY - JOUR
T1 - Technical efficiency-based selection of learning cases to improve forecasting accuracy of neural networks under monotonicity assumption
AU - Pendharkar, Parag C.
AU - Rodger, James A.
N1 - Funding Information:
The authors acknowledge financial support for this study from the Central Research Development Fund, Small Grants Program, University of Pittsburgh. We thank graduate student David Welliver for identifying relevant neural network literature and the anonymous reviewers for very valuable comments.
Copyright:
Copyright 2004 Elsevier Science B.V., Amsterdam. All rights reserved.
PY - 2003/9
Y1 - 2003/9
AB - In this paper, we show that when an artificial neural network (ANN) model is used to learn monotonic forecasting functions, it may be useful to screen training data so that the screened examples approximately satisfy the monotonicity property. We show how a technical efficiency-based ranking, obtained with a data envelopment analysis (DEA) model and a predetermined threshold efficiency, might be used to screen training data and identify a subset of examples that approximately satisfies the monotonicity property. Using a health care forecasting problem, the monotonicity assumption, and a predetermined threshold efficiency level, we apply DEA to split the training data into two mutually exclusive subsets, "efficient" and "inefficient". We then compare the performance of ANNs trained on the "efficient" and "inefficient" subsets. Our results indicate that the predictive performance of an ANN trained on the "efficient" training data subset is higher than that of an ANN trained on the "inefficient" training data subset.
UR - http://www.scopus.com/inward/record.url?scp=0037911566&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0037911566&partnerID=8YFLogxK
U2 - 10.1016/S0167-9236(02)00138-0
DO - 10.1016/S0167-9236(02)00138-0
M3 - Article
AN - SCOPUS:0037911566
SN - 0167-9236
VL - 36
SP - 117
EP - 136
JO - Decision Support Systems
JF - Decision Support Systems
IS - 1
ER -
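
The screening step summarized in the abstract can be illustrated with a short sketch. The Python code below is a hypothetical illustration only: it assumes an input-oriented CCR DEA model, treats each training example as a decision-making unit with its predictor values as DEA inputs and its target value as the DEA output, and uses an arbitrary efficiency threshold of 0.9. The paper's actual DEA formulation, input/output mapping, and threshold are not stated in the abstract.

```python
# Minimal sketch: DEA efficiency-based screening of training examples.
# Assumptions (not taken from the paper): input-oriented CCR model,
# predictors as DEA inputs, target as DEA output, threshold = 0.9.
import numpy as np
from scipy.optimize import linprog


def ccr_efficiency(inputs, outputs, o):
    """Technical efficiency of example (DMU) `o` under an input-oriented CCR model.

    inputs:  (n, m) array of m input attributes for n examples
    outputs: (n, s) array of s output attributes for n examples
    Returns a score in (0, 1]; 1.0 means the example lies on the efficient frontier.
    """
    n, m = inputs.shape
    s = outputs.shape[1]

    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0

    A_ub, b_ub = [], []
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_io  for each input i
    for i in range(m):
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    # Output constraints: sum_j lambda_j * y_rj >= y_ro  for each output r
    for r in range(s):
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])

    bounds = [(None, None)] + [(0, None)] * n  # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.fun if res.success else np.nan


def split_by_efficiency(inputs, outputs, threshold=0.9):
    """Split examples into 'efficient' and 'inefficient' index sets by threshold score."""
    scores = np.array([ccr_efficiency(inputs, outputs, o) for o in range(len(inputs))])
    efficient = np.where(scores >= threshold)[0]
    inefficient = np.where(scores < threshold)[0]
    return efficient, inefficient, scores
```

In this reading, the "efficient" index set would form the training subset for one ANN and the "inefficient" set for another, after which their out-of-sample forecasting accuracy would be compared; how the authors actually perform that comparison is described in the full article, not in this record.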