CAREER: No Time to Explain: Developing Robots that Actively Prevent Overtrust during Emergencies

Project: Research project

Project Details

Description

For many applications, safely using a robot requires that the user know when and how much to trust it. Yet research has shown that people tend to trust robots and automation too much, putting them at risk because they hold unrealistic expectations about what a robot sees, knows, and can do. The overall goal of this project is to develop robots that help people correctly calibrate their trust in the robot. The project examines this problem in the context of robot-guided emergency evacuation, with the goal that robots stationed inside buildings can serve as instantaneous first responders, helping people safely evacuate during an emergency. The project will lay the groundwork for a career-long research program devoted to understanding the causes that underlie overtrust and predicting situations that may induce people to accept too much risk. In addition to ensuring safe evacuation, this work will also foster the safe deployment of autonomous vehicles and healthcare robots. The safe use of robots is also an ethics question. This project therefore proposes a wide range of educational and outreach objectives focused on using robots to: (1) keep students safe in schools; (2) expand and develop robot ethics-related curricula; and (3) educate those in need.

This project concentrates on three objectives: (1) understanding the situational, experiential, and robot-related factors that lead people to trust a robot too much; (2) perceptually recognizing when a person trusts a robot too much; and (3) generating robot-conveyed commands and explanations that inform and engage an evacuee. The first objective examines how situational ambiguity and recent experiences with a robot influence trust; it will generate data used to create methods that allow a robot to predict the likelihood that a person's trust is miscalibrated. The second objective will produce a suite of new perceptual methods that allow a robot to evaluate how much a person trusts it and to detect overtrust. The final objective will investigate whether and how robot-generated commands and explanations can improve trust calibration during a robot-guided evacuation, developing behaviors that allow a robot to assert authority and explain its directions.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Active
Effective start/end date: 6/1/21 – 5/31/26

Funding

  • National Science Foundation: $192,414.00
