TY - JOUR
T1 - An approach to designing computer-based evaluation of student constructed responses
T2 - Effects on achievement and instructional time
AU - Gibbs, William J.
AU - Peck, Kyle L.
PY - 1995/3
Y1 - 1995/3
N2 - This inquiry examined the effectiveness of self-evaluation strategies to supplement computerized evaluation of constructed response answers. Additionally, self-evaluation was investigated as a means to improve learner recall of factual and comprehensive knowledge. The study compared the effects of five constructed response answer evaluation strategies on achievement and instructional time during computer-based learning. The five strategies were: 1) computerized evaluation only, 2) student evaluation only, 3) computerized evaluation and student evaluation, 4) student evaluation with required elaboration, and 5) computer and student evaluation with elaboration following conflicting evaluations. Analysis of the collected data revealed that achievement, as measured in this study, was unaffected by evaluation strategy. Similarly, the treatments did not affect students' evaluation of their responses. Across all self-evaluation groups, student evaluation did not differ substantially from expert evaluation, which may indicate that students can accurately evaluate their own work. The treatment strategies did differentially affect instructional time, with instructional time increasing as the level of interaction with the instructional software increased. Implications for the design of instructional software are discussed.
AB - This inquiry examined the effectiveness of self-evaluation strategies to supplement computerized evaluation of constructed response answers. Additionally, self-evaluation was investigated as a means to improve learner recall of factual and comprehensive knowledge. The study compared the effects of five constructed response answer evaluation strategies on achievement and instructional time during computer-based learning. The five strategies were: 1) computerized evaluation only, 2) student evaluation only, 3) computerized evaluation and student evaluation, 4) student evaluation with required elaboration, and 5) computer and student evaluation with elaboration following conflicting evaluations. Analysis of the collected data revealed that achievement, as measured in this study, was unaffected by evaluation strategy. Similarly, the treatments did not affect students' evaluation of their responses. Across all self-evaluation groups, student evaluation did not differ substantially from expert evaluation, which may indicate that students can accurately evaluate their own work. The treatment strategies did differentially affect instructional time, with instructional time increasing as the level of interaction with the instructional software increased. Implications for the design of instructional software are discussed.
UR - http://www.scopus.com/inward/record.url?scp=0005475403&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0005475403&partnerID=8YFLogxK
U2 - 10.1007/BF02941040
DO - 10.1007/BF02941040
M3 - Article
AN - SCOPUS:0005475403
SN - 1042-1726
VL - 6
SP - 99
EP - 119
JO - Journal of Computing in Higher Education
JF - Journal of Computing in Higher Education
IS - 2
ER -