Comparing several human and computer-based methods for scoring concept maps and essays

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants individually wrote a short essay from their concept map. The concept maps and essays were scored by the computer-based tools and by human raters using rubrics. Computer-based concept map scores were a very good measure of the qualitative aspects of the concept maps (r = 0.84) and were an adequate measure of the quantitative aspects (r = 0.65). Also, the computer-based essay scores were an adequate measure of essay content (r = 0.71). If computer-based approaches for scoring concept maps and essays can provide a valid, low-cost, easy-to-use, and easy-to-interpret measure of students' content knowledge, then these approaches will likely gain rapid acceptance by teachers at all levels.

Original language: English (US)
Pages (from-to): 227-239
Number of pages: 13
Journal: Journal of Educational Computing Research
Volume: 32
Issue number: 3
DOIs
State: Published - 2005

All Science Journal Classification (ASJC) codes

  • Education
  • Computer Science Applications
