Certifiable trust in autonomous systems: Making the intractable tangible

Joseph B. Lyons, Matthew A. Clark, Alan R. Wagner, Matthew J. Schuelke

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

Advances in robotics may reshape the landscape of daily life, yet those in the military have been part of the robotics revolution for some time now. One cannot traverse far within military echelons nor listen to the popular press without hearing planning, discussion, and for some, a great deal of concern regarding the military's latest push toward autonomous systems. The military's use of drones (uninhabited aerial systems, or UASs) has been a ubiquitous topic of discussion and criticism within the popular media for several years since their highly publicized use in regions such as Pakistan, Yemen, and Afghanistan. Much of the chagrin surrounding these systems, despite the fact that they are currently teleoperated with human oversight and command, has to do with whether or not we can or should trust them in a combat environment. Robotic systems within the military may be operated in hostile, complex situations and may, someday, be given the authority to execute lethal decisions within the battle space (Arkin 2009). However, future concepts of operations (CONOPS) will likely inject greater autonomy into these systems, which will ultimately increase the need for understanding the trust dynamics that exist between humans and machines. As will be discussed in this article, the challenge of understanding these trust dynamics is more complicated than simply increasing the system's reliability.

Original language: English (US)
Pages (from-to): 37-49
Number of pages: 13
Journal: AI Magazine
Volume: 38
Issue number: 3
DOIs
State: Published - Sep 1 2017

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
