Prompting, feedback and error correction in the design of a scenario machine

John Carroll, Dana S. Kay

Research output: Contribution to journal › Article › peer-review


Abstract

A scenario machine limits the user to a single action path through system functions and procedures. Four scenario machines were designed to embody different approaches to prompting, feedback, and automatic error correction for a “learning-by-doing” training simulator for a commercial, menu-based word processor. Compared with users trained directly on the commercial system, scenario machine users demonstrated an overall advantage in the “getting started” stage of learning. Initial training on a “prompting + automatic correction” system was particularly efficient, encouraging a DWIM (or “do what I mean”) approach to training system design. Curiously, training on a “prompting + feedback” system led to relatively impaired performance on a set of transfer-of-learning tasks. It was suggested that too much training information support may obscure the task coherence of the action scenario itself, relative to a design that provides less explicit direction.
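
To make the concept concrete, the sketch below is a minimal, hypothetical Python illustration of a scenario machine: a training harness that confines the learner to one scripted action path, with configuration flags for prompting, error feedback, and DWIM-style automatic correction. The four training variants studied in the paper correspond to different flag combinations; all class, method, and step names here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (hypothetical) of a "scenario machine": one fixed action path,
# with prompting, error feedback, and DWIM-style auto-correction as options.
from dataclasses import dataclass


@dataclass
class Step:
    prompt: str    # instruction shown when prompting is enabled
    expected: str  # the single action accepted at this point in the path
    feedback: str  # message shown on an error when feedback is enabled


class ScenarioMachine:
    def __init__(self, steps, prompting=False, feedback=False, auto_correct=False):
        self.steps = steps
        self.prompting = prompting
        self.feedback = feedback
        self.auto_correct = auto_correct
        self.position = 0

    def next_prompt(self):
        """Return the prompt for the current step, if prompting is enabled."""
        if self.position < len(self.steps) and self.prompting:
            return self.steps[self.position].prompt
        return None

    def act(self, action):
        """Accept one learner action; only the scripted action advances the path."""
        step = self.steps[self.position]
        if action == step.expected:
            self.position += 1
            return "ok"
        if self.auto_correct:
            # DWIM: treat the erroneous action as the intended one and move on.
            self.position += 1
            return "corrected"
        if self.feedback:
            return step.feedback
        return "blocked"  # wrong action is ignored; the path does not advance

    @property
    def finished(self):
        return self.position >= len(self.steps)


if __name__ == "__main__":
    # A tiny scripted path through a menu-based word-processor task (illustrative).
    path = [
        Step("Choose CREATE from the main menu.", "create",
             "Select CREATE, not another menu item."),
        Step("Type the document name and press ENTER.", "name",
             "Enter a document name before continuing."),
    ]
    # The "prompting + automatic correction" variant, one of four configurations.
    machine = ScenarioMachine(path, prompting=True, auto_correct=True)
    while not machine.finished:
        print(machine.next_prompt())
        print(machine.act("create" if machine.position == 0 else "oops"))
```

Under this reading, "prompting + feedback" and "prompting + automatic correction" differ only in how an off-path action is handled (explained versus silently absorbed), which is the design contrast the abstract's transfer result turns on.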

Original language: English (US)
Pages (from-to): 11-27
Number of pages: 17
Journal: International Journal of Man-Machine Studies
Volume: 28
Issue number: 1
DOIs
State: Published - Jan 1 1988

All Science Journal Classification (ASJC) codes

  • General Engineering
