TY - GEN
T1 - Learning When to Defer to Humans for Short Answer Grading
AU - Li, Zhaohui
AU - Zhang, Chengning
AU - Jin, Yumi
AU - Cang, Xuesong
AU - Puntambekar, Sadhana
AU - Passonneau, Rebecca J.
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023
Y1 - 2023
N2 - To assess student knowledge, educators face a tradeoff between open-ended and fixed-response questions. Open-ended questions are easier to formulate and provide greater insight into student learning, but are burdensome to assess. Machine learning methods that could reduce the assessment burden also have a cost, given that large datasets of reliably assessed examples (labeled data) are required for training and testing. We address the human costs of assessment and data labeling using selective prediction, where the output of a machine-learned model is used when the model makes a confident decision, but otherwise the model defers to a human decision-maker. The goal is to defer less often while maintaining human assessment quality on the total output. We refer to the deferral criteria as a deferral policy, and we show that it is possible to learn when to defer. We first trained an autograder on a combination of historical data and a small amount of newly labeled data, achieving moderate performance. We then used the autograder output as input to a logistic regression to learn when to defer. The learned logistic regression equation constitutes a deferral policy. Tests of the selective prediction method on a held-out test set showed that human-level assessment quality can be achieved with a major reduction of human effort.
UR - http://www.scopus.com/inward/record.url?scp=85164922366&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164922366&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-36272-9_34
DO - 10.1007/978-3-031-36272-9_34
M3 - Conference contribution
AN - SCOPUS:85164922366
SN - 9783031362712
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 414
EP - 425
BT - Artificial Intelligence in Education - 24th International Conference, AIED 2023, Proceedings
A2 - Wang, Ning
A2 - Rebolledo-Mendez, Genaro
A2 - Matsuda, Noboru
A2 - Santos, Olga C.
A2 - Dimitrova, Vania
PB - Springer Science and Business Media Deutschland GmbH
T2 - 24th International Conference on Artificial Intelligence in Education, AIED 2023
Y2 - 3 July 2023 through 7 July 2023
ER -