An approach to designing computer-based evaluation of student constructed responses: Effects on achievement and instructional time

William J. Gibbs, Kyle L. Peck

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

This inquiry examined the effectiveness of self-evaluation strategies as a supplement to computerized evaluation of constructed-response answers. Additionally, self-evaluation was examined as a means to improve learner recall of factual and comprehension-level knowledge. The study compared the effects of five constructed-response evaluation strategies on achievement and instructional time during computer-based learning. The five strategies were: 1) computerized evaluation only, 2) student evaluation only, 3) computerized evaluation and student evaluation, 4) student evaluation with required elaboration, and 5) computer and student evaluation with elaboration following conflicting evaluations. Analysis of the collected data revealed that achievement, as measured in this study, was unaffected by evaluation strategy. Likewise, the treatments did not affect student evaluation of responses. Across all self-evaluation groups, student evaluation did not differ substantially from expert evaluation, which may indicate that students can accurately evaluate their own work. The treatment strategies did, however, differentially affect instructional time, with instructional time increasing as the level of interaction with the instructional software increased. Implications for the design of instructional software are discussed.

Original language: English (US)
Pages (from-to): 99-119
Number of pages: 21
Journal: Journal of Computing in Higher Education
Volume: 6
Issue number: 2
DOIs
State: Published - Mar 1995

All Science Journal Classification (ASJC) codes

  • Education
