Improving Human-Machine Collaboration Through Transparency-based Feedback – Part II: Control Design and Synthesis

Kumar Akash, Tahira Reid, Neera Jain

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

To attain improved human-machine collaboration, it is necessary for autonomous systems to infer human trust and workload and respond accordingly. In turn, autonomous systems require models that capture both human trust and workload dynamics. In a companion paper, we developed a trust-workload partially observable Markov decision process (POMDP) model framework that captured changes in human trust and workload for contexts involving interaction between a human and an intelligent decision-aid system. In this paper, we define intuitive reward functions and show that these can be readily transformed for integration with the proposed POMDP model. We synthesize a near-optimal control policy, using transparency as the feedback variable, based on solutions for two cases: (1) increasing human trust and reducing workload, and (2) improving overall performance along with the aforementioned trust and workload objectives. We implement these solutions in a reconnaissance mission study in which human subjects are aided by a virtual robotic assistant in completing a series of missions. We show that it is not always beneficial to aim to improve trust; instead, when designing intelligent decision-aid systems that influence trust-workload behavior, the control objective should be to optimize a context-specific performance objective.
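The abstract describes synthesizing a near-optimal policy for a trust-workload POMDP, with the decision aid's transparency level acting as the control input. As an illustration only, the sketch below shows one common way such a policy can be approximated: value iteration on the underlying MDP followed by the QMDP rule applied to a belief over trust-workload states. The state labels, transition and observation probabilities, reward weights, and discount factor are all invented for illustration; they are not taken from the paper, and QMDP is simply one off-the-shelf approximation, not necessarily the authors' synthesis procedure.

```python
import numpy as np

# Hypothetical trust-workload states and transparency actions; the paper's
# actual POMDP states, actions, and probabilities are not reproduced here.
STATES = [("lowTrust", "lowLoad"), ("lowTrust", "highLoad"),
          ("highTrust", "lowLoad"), ("highTrust", "highLoad")]
ACTIONS = ["low_transparency", "medium_transparency", "high_transparency"]
n_s, n_a = len(STATES), len(ACTIONS)

rng = np.random.default_rng(0)

# Assumed transition model T[a, s, s'] and observation model O[a, s', o]
# (randomly generated stand-ins, normalized into valid distributions).
T = rng.random((n_a, n_s, n_s))
T /= T.sum(axis=2, keepdims=True)
O = rng.random((n_a, n_s, 2))  # two hypothetical behavioral observations
O /= O.sum(axis=2, keepdims=True)

# An "intuitive" reward in the spirit of case (1): reward high trust,
# penalize high workload. The weights 1.0 and 0.5 are assumptions.
R = np.array([(1.0 if t == "highTrust" else 0.0)
              - (0.5 if w == "highLoad" else 0.0) for t, w in STATES])

# Value iteration on the fully observable MDP to obtain Q[a, s].
gamma = 0.95
Q = np.zeros((n_a, n_s))
for _ in range(1000):
    V = Q.max(axis=0)                 # greedy state values
    Q = R[None, :] + gamma * (T @ V)  # Bellman backup for each action

def belief_update(b, a, o):
    """Bayes filter: b'(s') ∝ O[a, s', o] * Σ_s T[a, s, s'] b(s)."""
    b_pred = b @ T[a]            # predict the next-state distribution
    b_new = O[a, :, o] * b_pred  # weight by observation likelihood
    return b_new / b_new.sum()

def qmdp_action(b):
    """QMDP approximation: pick the action maximizing E_b[Q(a, s)]."""
    return int(np.argmax(Q @ b))

# Example interaction step: start from a uniform belief, choose a
# transparency level, then fold in a (made-up) observation.
b = np.full(n_s, 1.0 / n_s)
a = qmdp_action(b)
print("chosen action:", ACTIONS[a])
b = belief_update(b, a, o=1)
print("updated belief:", np.round(b, 3))
```

For case (2) in the abstract, a context-specific performance term would be added to the reward vector above; the sketch only encodes the trust and workload objectives.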

Original language: English (US)
Pages (from-to): 322-328
Number of pages: 7
Journal: IFAC-PapersOnLine
Volume: 51
Issue number: 34
State: Published - Jan 1 2019

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
