TY - CONF
T1 - Answer-state Recurrent Relational Network (AsRRN) for Constructed Response Assessment and Feedback Grouping
AU - Li, Zhaohui
AU - Lloyd, Susan E.
AU - Beckman, Matthew D.
AU - Passonneau, Rebecca J.
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
AB - STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, against constructed response (CR) questions, in which students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate the context, the questions, the reference responses, and students' answers, we developed an Answer-state Recurrent Relational Network (AsRRN). In recurrent time steps, relation vectors are learned for specific dependencies in a computational graph, whose nodes encode the distinct types of text input. AsRRN incorporates contrastive loss for better representation learning, which improves performance and supports student feedback. AsRRN was developed on a new dataset of 6,532 student responses to three two-part CR questions. AsRRN outperforms classifiers based on LLMs, a previous relational network for CR questions, another graph neural network baseline, and few-shot learning with GPT-3.5. Ablation studies show the distinct contributions of AsRRN's dependency structure, the number of time steps in the recurrence, and the contrastive loss.
UR - http://www.scopus.com/inward/record.url?scp=85183303361&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85183303361&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85183303361
T3 - Findings of the Association for Computational Linguistics: EMNLP 2023
SP - 3879
EP - 3891
BT - Findings of the Association for Computational Linguistics: EMNLP 2023
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the Association for Computational Linguistics: EMNLP 2023
Y2 - 6 December 2023 through 10 December 2023
ER -