TY - JOUR
T1 - Automated Scoring of Scientific Creativity in German
AU - Goecke, Benjamin
AU - DiStefano, Paul V.
AU - Aschauer, Wolfgang
AU - Haim, Kurt
AU - Beaty, Roger
AU - Forthmann, Boris
N1 - Publisher Copyright:
© 2024 The Authors. The Journal of Creative Behavior published by Wiley Periodicals LLC on behalf of Creative Education Foundation (CEF).
PY - 2024/9
Y1 - 2024/9
AB - Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a scientific creative thinking task that assesses divergent ideation in experimental tasks in the German language. Participants are required to generate alternative explanations for an empirical observation. This work analyzed a total of 13,423 unique responses. To predict human ratings of originality, we used XLM-RoBERTa (Cross-lingual Language Model-RoBERTa), a large, multilingual model. The prediction model was trained on 9,400 responses. Results showed a strong correlation between model predictions and human ratings in a held-out test set (n = 2,682; r = 0.80; 95% CI [0.79, 0.81]). These promising findings underscore the potential of large language models for automated scoring of scientific creative thinking in the German language. We encourage researchers to further investigate automated scoring of other domain-specific creative thinking tasks.
UR - http://www.scopus.com/inward/record.url?scp=85192904539&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85192904539&partnerID=8YFLogxK
DO - 10.1002/jocb.658
M3 - Article
AN - SCOPUS:85192904539
SN - 0022-0175
VL - 58
SP - 321
EP - 327
JO - The Journal of Creative Behavior
JF - The Journal of Creative Behavior
IS - 3
ER -