Abstract
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a classroom-based, sophomore-level management course, undergraduate participants completed a 100-item multiple-choice final examination and then answered an extended-response essay question comparing four management theories. The essays were quantified with ALA-Reader software applying both sentence-wise and linear lexical aggregate approaches, and then analyzed with Pathfinder KNOT software. The linear aggregate approach was a better measure of essay content structure than the sentence-wise approach, with significant Spearman correlations of 0.60 and 0.45 with the human rater essay scores. The group network representations of low- and high-performing students were reasonable and straightforward to interpret: the high group was more similar to the expert, and the low and high groups were more similar to each other than to the expert. Suggestions for further research are provided.
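For readers unfamiliar with the linear lexical aggregate idea, the sketch below illustrates one plausible reading of it in Python: reduce an essay to its ordered stream of key-concept mentions, link consecutive distinct concepts into a network, compare that network to an expert's, and rank-correlate the resulting scores with human rater scores. This is a toy illustration, not the actual ALA-Reader or Pathfinder KNOT implementation; the concept terms, the link-overlap index, and all scores are hypothetical assumptions.

```python
# Minimal sketch of a linear lexical aggregate, NOT the ALA-Reader or
# Pathfinder KNOT implementation. Concept terms, the link-overlap index,
# and all scores below are illustrative assumptions.
from scipy.stats import spearmanr

CONCEPTS = {"taylor", "fayol", "weber", "follett"}  # hypothetical key terms

def concept_links(essay: str) -> set[tuple[str, str]]:
    """Linear aggregate: reduce the essay to its ordered stream of
    concept mentions, then link each consecutive pair of distinct
    concepts, ignoring all intervening non-concept words."""
    words = [w.strip(".,;:!?").lower() for w in essay.split()]
    mentions = [w for w in words if w in CONCEPTS]
    return {tuple(sorted(pair)) for pair in zip(mentions, mentions[1:])
            if pair[0] != pair[1]}

def overlap_score(student: set, expert: set) -> float:
    """Share of expert links recovered in a student's network, a crude
    stand-in for a Pathfinder network-similarity index."""
    return len(student & expert) / len(expert) if expert else 0.0

expert = concept_links(
    "Taylor emphasized efficiency while Fayol stressed administration; "
    "Weber described bureaucracy, in contrast with Follett."
)
student = concept_links("Taylor influenced Fayol, and Weber differs from Taylor.")
print(f"overlap with expert network: {overlap_score(student, expert):.2f}")

# Hypothetical validation step: rank-correlate computer-derived scores
# with human rater scores across a set of essays (values are made up).
computer_scores = [0.25, 0.50, 0.75, 0.50, 1.00]
human_scores = [2.0, 3.0, 4.5, 3.5, 5.0]
rho, p = spearmanr(computer_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```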
| Original language | English (US) |
|---|---|
| Pages (from-to) | 211-227 |
| Number of pages | 17 |
| Journal | Journal of Educational Computing Research |
| Volume | 37 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2007 |
All Science Journal Classification (ASJC) codes
- Education
- Computer Science Applications