Developing robots that recognize when they are being trusted

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Scopus citations

Abstract

In previous work we presented a computational framework that allows a robot or agent to reason about whether it should trust an interactive partner or whether the interactive partner trusts the robot (Wagner & Arkin, 2011). This article examines the use of this framework in a well-known situation for examining trust: the Investor-Trustee game (King-Casas, Tomlin, Anen, Camerer, Quartz, & Montague, 2005). Our experiment pits the robot against a person in this game and explores the impact of recognizing and responding to trust signals. Our results demonstrate that recognizing that a person has intentionally placed themselves at risk allows the robot to reciprocate and, by doing so, improve both individuals' play in the game. This work has implications for home healthcare, search and rescue, and military applications.
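For readers unfamiliar with the Investor-Trustee game, the sketch below illustrates the payoff structure of a single round under the standard protocol, in which the invested amount is tripled in transit and the trustee returns some fraction of it. This is only an illustration with hypothetical function and parameter names, not the authors' implementation or experimental setup.

# Minimal sketch of one round of the Investor-Trustee game (hypothetical names;
# not the authors' implementation). The investor sends part of an endowment,
# the amount is tripled in transit, and the trustee returns some fraction.

def play_round(endowment, invested, returned_fraction, multiplier=3):
    """Compute both players' payoffs for a single round.

    invested          amount the investor sends (a signal of trust)
    returned_fraction share of the multiplied amount the trustee sends back
    """
    assert 0 <= invested <= endowment
    assert 0.0 <= returned_fraction <= 1.0

    transferred = invested * multiplier          # trustee receives the tripled investment
    repaid = transferred * returned_fraction     # trustee's reciprocation

    investor_payoff = endowment - invested + repaid
    trustee_payoff = transferred - repaid
    return investor_payoff, trustee_payoff


if __name__ == "__main__":
    # A large investment met with generous repayment benefits both players,
    # which is the reciprocity effect the abstract refers to.
    print(play_round(endowment=10, invested=8, returned_fraction=0.5))   # (14.0, 12.0)
    # No trust placed: the investor keeps the endowment, the trustee gets nothing.
    print(play_round(endowment=10, invested=0, returned_fraction=0.5))   # (10.0, 0.0)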

Original language: English (US)
Title of host publication: Trust and Autonomous Systems - Papers from the AAAI Spring Symposium, Technical Report
Pages: 84-89
Number of pages: 6
State: Published - Sep 9, 2013
Event: 2013 AAAI Spring Symposium - Palo Alto, CA, United States
Duration: Mar 25, 2013 - Mar 27, 2013

Publication series

Name: AAAI Spring Symposium - Technical Report
Volume: SS-13-07

Other

Other: 2013 AAAI Spring Symposium
Country/Territory: United States
City: Palo Alto, CA
Period: 3/25/13 - 3/27/13

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
