Tests commonly used to determine progress in reading skill and proficiency typically assess reading products (e.g., identifying characters or factual information, sequencing events) rather than the reading processes used to generate responses (e.g., hypothesizing, evaluating, monitoring, questioning). Yet effective processing often determines how successfully a reader responds on testing measures. Identifying measures that can help educators better understand how a student processes text is therefore vital. The purpose of this research was to compare the data generated from 4 assessment methods used to evaluate how readers process text: think-aloud, interview, error detection, and questionnaire. In this descriptive study, 40 fourth-grade students of average reading ability read the same text, and data were collected as each student responded to 1 of the 4 assessment measures. Results indicated that students assessed with think-aloud and interview measures generated a greater number and broader range of text-processing responses. Think-aloud protocols reflected a close interaction with the text, while interview responses included more evidence of metacognitive processing. Issues frequently identified as problematic in error detection research (e.g., difficulty finding errors, purpose for reading) were also evident in this study. The questionnaires provided less specific data about individual text processing because students were limited by the answer choices for each question. Results from this study suggest that by using think-aloud and interview assessments, educators can obtain a more complete understanding of a reader's text-processing skill than by using error detection and questionnaire methods.