Automated Scoring of Scientific Creativity in German

Benjamin Goecke, Paul V. DiStefano, Wolfgang Aschauer, Kurt Haim, Roger Beaty, Boris Forthmann

Research output: Contribution to journal › Article › peer-review


Abstract

Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a German-language scientific creative thinking task that assesses divergent ideation in experimental contexts. Participants are required to generate alternative explanations for an empirical observation. This work analyzed a total of 13,423 unique responses. To predict human ratings of originality, we used XLM-RoBERTa (Cross-lingual Language Model-RoBERTa), a large, multilingual model. The prediction model was trained on 9,400 responses. Results showed a strong correlation between model predictions and human ratings in a held-out test set (n = 2,682; r = 0.80; 95% CI [0.79, 0.81]). These promising findings underscore the potential of large language models for automated scoring of scientific creative thinking in the German language. We encourage researchers to further investigate automated scoring of other domain-specific creative thinking tasks.
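The abstract does not spell out the fine-tuning setup, but the described approach (fine-tune XLM-RoBERTa to predict human originality ratings, then correlate predictions with ratings on a held-out set) could look roughly as follows. This is a minimal sketch under stated assumptions, not the authors' code: the model variant ("xlm-roberta-base"), hyperparameters, file names ("train.csv", "test.csv"), and column names ("response", "label") are all illustrative.

```python
# Minimal sketch (not the authors' implementation): fine-tuning XLM-RoBERTa as a
# regression model that predicts human originality ratings for German responses.
# Model variant, hyperparameters, file names, and column names are assumptions.
from scipy.stats import pearsonr
from datasets import Dataset, Value
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "xlm-roberta-base"  # the paper uses XLM-RoBERTa; exact size assumed

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1, problem_type="regression"  # one continuous output
)

def tokenize(batch):
    # Pad/truncate each response so the default collator can batch them directly.
    return tokenizer(batch["response"], truncation=True,
                     padding="max_length", max_length=128)

# Hypothetical CSVs with a German "response" column and a human "label" rating.
train_ds = (Dataset.from_csv("train.csv")
            .cast_column("label", Value("float32"))
            .map(tokenize, batched=True))
test_ds = (Dataset.from_csv("test.csv")
           .cast_column("label", Value("float32"))
           .map(tokenize, batched=True))

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    r, _ = pearsonr(preds.squeeze(), labels)  # agreement with human ratings
    return {"pearson_r": r}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-originality",
                           num_train_epochs=3,
                           per_device_train_batch_size=16,
                           learning_rate=2e-5),
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # Pearson r between predictions and ratings on the held-out set
```

The final evaluation step mirrors the quantity reported in the abstract: the correlation between model predictions and human originality ratings on the held-out test set, which the authors report as r = 0.80.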

Original language: English (US)
Pages (from-to): 321-327
Number of pages: 7
Journal: Journal of Creative Behavior
Volume: 58
Issue number: 3
State: Published - Sep 2024

All Science Journal Classification (ASJC) codes

  • Education
  • Developmental and Educational Psychology
  • Visual Arts and Performing Arts
