Designs for explaining intelligent agents

Steven R. Haynes, Mark A. Cohen, Frank E. Ritter

Research output: Contribution to journal › Article › peer-review

48 Scopus citations


Explanation is an important capability for usable intelligent systems, including intelligent agents and cognitive models embedded within simulations and other decision support systems. Explanation facilities help users understand how and why an intelligent system possesses a given structure and set of behaviors. Prior research has produced a number of approaches to providing explanation capabilities and has identified some significant challenges. We describe designs that can be reused to create intelligent agents capable of explaining themselves. The designs include ways to provide ontological, mechanistic, and operational explanations. These designs inscribe lessons learned from prior research and provide guidance for incorporating explanation facilities into intelligent systems. The designs are derived both from prior research on explanation tool design and from the empirical study reported here on the questions users ask when working with an intelligent system. We demonstrate the use of these designs through examples implemented using the Herbal high-level cognitive modeling language. These designs can help build better agents: they support creating more usable and more affordable intelligent agents by encapsulating prior knowledge about how to generate explanations in concise representations that can be instantiated or adapted by agent developers.

Original language: English (US)
Pages (from-to): 90-110
Number of pages: 21
Journal: International Journal of Human Computer Studies
Issue number: 1
State: Published - Jan 2009

All Science Journal Classification (ASJC) codes

  • Software
  • Human Factors and Ergonomics
  • Education
  • General Engineering
  • Human-Computer Interaction
  • Hardware and Architecture
