Abstract
To attain improved human-machine collaboration, it is necessary for autonomous systems to infer human trust and workload and respond accordingly. In turn, autonomous systems require models that capture both human trust and workload dynamics. In a companion paper, we developed a trust-workload partially observable Markov decision process (POMDP) model framework that captured changes in human trust and workload for contexts that involve interaction between a human and an intelligent decision-aid system. In this paper, we define intuitive reward functions and show that these can be readily transformed for integration with the proposed POMDP model. We synthesize a near-optimal control policy using transparency as the feedback variable based on solutions for two cases: 1) increasing human trust and reducing workload, and 2) improving overall performance along with the aforementioned objectives for trust and workload. We implement these solutions in a reconnaissance mission study in which human subjects are aided by a virtual robotic assistant in completing a series of missions. We show that it is not always beneficial to aim to improve trust; instead, the control objective should be to optimize a context-specific performance objective when designing intelligent decision-aid systems that influence trust-workload behavior.
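The abstract describes reward-function design and policy synthesis for a trust-workload POMDP with transparency as the control input, but does not give implementation details. The following is a minimal illustrative sketch only, not the authors' model: the state, action, and observation spaces, every probability and reward value, and the use of a QMDP-style approximation are all assumptions made here for illustration.

```python
# Minimal sketch: a toy trust-workload POMDP with transparency as the action,
# an "intuitive" reward over trust/workload, and a QMDP approximate policy.
# All numbers are hypothetical placeholders, not values from the paper.
import numpy as np

# Hidden states: (trust, workload) in {low, high} x {low, high}
states = ["T-lo/W-lo", "T-lo/W-hi", "T-hi/W-lo", "T-hi/W-hi"]
# Actions: transparency level of the decision-aid system
actions = ["low-transparency", "high-transparency"]
# Observations: operator reliance on the decision aid
observations = ["comply", "not-comply"]

n_s, n_a, n_o = len(states), len(actions), len(observations)
gamma = 0.95  # discount factor

# T[a, s, s']: hypothetical transitions; higher transparency tends to raise
# trust but also raises workload.
T = np.array([
    [[0.80, 0.10, 0.08, 0.02],
     [0.10, 0.80, 0.02, 0.08],
     [0.10, 0.05, 0.75, 0.10],
     [0.05, 0.10, 0.10, 0.75]],
    [[0.40, 0.20, 0.30, 0.10],
     [0.10, 0.45, 0.15, 0.30],
     [0.05, 0.05, 0.70, 0.20],
     [0.02, 0.08, 0.20, 0.70]],
])

# O[a, s', o]: probability of observing compliance given the next state.
O = np.array([
    [[0.30, 0.70], [0.25, 0.75], [0.85, 0.15], [0.75, 0.25]],
    [[0.35, 0.65], [0.30, 0.70], [0.90, 0.10], [0.80, 0.20]],
])

# R[a, s]: intuitive reward -- reward high trust, penalize high workload,
# with a small cost for displaying extra transparency.
R = np.array([
    [0.0, -1.0, 2.0, 1.0],
    [-0.2, -1.2, 1.8, 0.8],
])

def qmdp_values(T, R, gamma, iters=500):
    """Value-iterate the underlying fully observable MDP to get Q(s, a)."""
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R.T + gamma * np.einsum("ast,t->sa", T, V)
    return Q

def belief_update(b, a, o, T, O):
    """Bayes filter over the hidden trust-workload state."""
    b_pred = b @ T[a]              # predict next-state distribution
    b_new = b_pred * O[a, :, o]    # weight by observation likelihood
    return b_new / b_new.sum()

Q = qmdp_values(T, R, gamma)

# QMDP policy: choose the transparency level with the highest belief-weighted Q.
belief = np.full(n_s, 1.0 / n_s)
for step, obs in enumerate([0, 0, 1]):   # a hypothetical observation sequence
    action = int(np.argmax(belief @ Q))
    print(f"step {step}: belief={np.round(belief, 2)}, action={actions[action]}")
    belief = belief_update(belief, action, obs, T, O)
```

The paper's second case adds a mission-performance term to the objective; in a sketch like this, that would amount to extending R with a context-specific performance reward rather than rewarding trust and workload alone.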
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 322-328 |
| Number of pages | 7 |
| Journal | IFAC-PapersOnLine |
| Volume | 51 |
| Issue number | 34 |
| DOIs | |
| State | Published - Jan 1 2019 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering