All Science Journal Classification (ASJC) codes
- Computer Vision and Pattern Recognition
- Computer Science Applications
Proceedings - 2015 12th Conference on Computer and Robot Vision, CRV 2015. Institute of Electrical and Electronics Engineers Inc., 2015, pp. 155-160, article 7158334 (Proceedings - 2015 12th Conference on Computer and Robot Vision, CRV 2015).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Improved Threshold Selection by Using Calibrated Probabilities for Random Forest Classifiers
AU - Baumann, Florian
AU - Chen, Jinghui
AU - Vogt, Karsten
AU - Rosenhahn, Bodo
N1 - Publisher Copyright: © 2015 IEEE.
PY - 2015/7/14
Y1 - 2015/7/14
N2 - Random Forest is a well-known ensemble learning method that achieves high recognition accuracy while preserving a fast training procedure. To construct a Random Forest classifier, several decision trees are arranged in a forest, and a majority vote yields the final decision. To split each node of a decision tree into two children, several candidate variables are selected at random and a splitting criterion is computed for each of them. From this pool of candidate splits, the Random Forest algorithm selects the best variable according to the splitting criterion. Often, this split is unreliable, which reduces recognition accuracy. In this paper, we propose an additional condition for selecting the best variable that improves recognition accuracy, especially for smaller numbers of trees. We enhance the standard threshold selection with a quality estimate computed by a calibration method. The proposed method is evaluated on machine learning as well as object recognition datasets.
AB - Random Forest is a well-known ensemble learning method that achieves high recognition accuracy while preserving a fast training procedure. To construct a Random Forest classifier, several decision trees are arranged in a forest, and a majority vote yields the final decision. To split each node of a decision tree into two children, several candidate variables are selected at random and a splitting criterion is computed for each of them. From this pool of candidate splits, the Random Forest algorithm selects the best variable according to the splitting criterion. Often, this split is unreliable, which reduces recognition accuracy. In this paper, we propose an additional condition for selecting the best variable that improves recognition accuracy, especially for smaller numbers of trees. We enhance the standard threshold selection with a quality estimate computed by a calibration method. The proposed method is evaluated on machine learning as well as object recognition datasets.
UR - https://www.scopus.com/pages/publications/84943159205
U2 - 10.1109/CRV.2015.28
DO - 10.1109/CRV.2015.28
M3 - Conference contribution
AN - SCOPUS:84943159205
T3 - Proceedings - 2015 12th Conference on Computer and Robot Vision, CRV 2015
SP - 155
EP - 160
BT - Proceedings - 2015 12th Conference on Computer and Robot Vision, CRV 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th Conference on Computer and Robot Vision, CRV 2015
Y2 - 3 June 2015 through 5 June 2015
ER -
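The abstract describes augmenting standard Random Forest node splitting with a quality estimate obtained from calibrated probabilities. The record does not specify the paper's calibration method, so the sketch below is only an illustration of the general idea: candidate splits are scored by Gini gain, and a split is accepted only if a stand-in "calibrated" confidence (here, Laplace-smoothed child-node class frequencies) exceeds a threshold. The names `best_split`, `calibrated_confidence`, and the `min_quality` parameter are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def gini(y):
    # Gini impurity of a label array.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def calibrated_confidence(y, n_classes=2):
    # Stand-in for a calibrated probability: Laplace-smoothed
    # frequency of the majority class in a child node.
    _, counts = np.unique(y, return_counts=True)
    return (counts.max() + 1) / (len(y) + n_classes)

def best_split(X, y, n_candidates=3, min_quality=0.6, n_classes=2, seed=0):
    # Standard Random Forest node split: sample a few features at random
    # and pick the (feature, threshold) pair with the largest Gini gain,
    # but additionally reject splits whose children are low-confidence.
    rng = np.random.default_rng(seed)
    parent = gini(y)
    feats = rng.choice(X.shape[1], size=min(n_candidates, X.shape[1]),
                       replace=False)
    best = None
    for f in feats:
        for t in np.unique(X[:, f])[:-1]:
            mask = X[:, f] <= t
            left, right = y[mask], y[~mask]
            gain = parent - (len(left) * gini(left)
                             + len(right) * gini(right)) / len(y)
            quality = min(calibrated_confidence(left, n_classes),
                          calibrated_confidence(right, n_classes))
            if quality < min_quality:
                continue  # additional condition: skip unreliable splits
            if best is None or gain > best[0]:
                best = (gain, f, float(t))
    return best  # (gain, feature, threshold), or None if no split qualifies

# Toy usage: one perfectly separable feature.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
print(best_split(X, y))  # (0.5, 0, 1.0)
```

Raising `min_quality` makes the condition stricter: with `min_quality=0.99` no split on the toy data qualifies and `best_split` returns `None`, which in a full tree-growing loop would typically mean falling back to the next-best candidate or stopping the split.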