Abstract
Dishonesty is found throughout normal interpersonal interaction. A lie is a specific type of dishonesty: a commonly accepted definition of the term “lie” is a false statement made by an individual who knows that the statement is not true (Carson 2006). This chapter explores the computational and social-psychological underpinnings that enable a robot to utter lies, using a framework we have previously applied to non-verbal deception. We use the interdependence framework as the foundation for analyzing various types of lies because it provides conceptual tools for understanding the roles of the situation and of the robot’s disposition in determining whether or not to lie. The results of two experiments in which a robot plays a card game with a human support our hypotheses that 1) the interdependence framework can be applied to lying; 2) the framework provides a basis for understanding the factors that shape an individual’s decision to lie; and 3) an individual’s interaction history influences their decision to lie. Our findings also demonstrate that stereotyped partner models can be used to bootstrap a robot’s evaluation of the costs and benefits of lying, as well as of the likelihood that an individual will challenge the truth of the robot’s statements. We conclude the chapter with suggestions for future work.
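The abstract describes a cost-benefit decision informed by a stereotyped partner model. As a minimal sketch of that idea only (not the chapter’s implementation; the class, payoff values, and probability names below are hypothetical), a robot could compare the expected payoff of lying, weighted by the modeled likelihood of being challenged, against the payoff of truth-telling:

```python
# Hypothetical sketch: an interdependence-style payoff comparison in which a
# stereotyped partner model supplies the chance of being challenged and the
# costs/benefits of lying. All names and values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class PartnerModel:
    """Stereotyped model of the human partner (illustrative fields)."""
    p_challenge: float   # estimated probability the partner challenges a statement
    cost_caught: float   # cost to the robot if a lie is challenged and exposed
    gain_lie: float      # payoff to the robot if the lie goes unchallenged
    gain_truth: float    # payoff to the robot for telling the truth

def expected_value_of_lying(m: PartnerModel) -> float:
    """Expected payoff of lying under the partner model."""
    return (1 - m.p_challenge) * m.gain_lie + m.p_challenge * (-m.cost_caught)

def decide_to_lie(m: PartnerModel) -> bool:
    """Lie only when its expected payoff exceeds that of truth-telling."""
    return expected_value_of_lying(m) > m.gain_truth

# A skeptical partner (high challenge probability) deters lying;
# a trusting partner makes the lie pay off in expectation.
skeptic = PartnerModel(p_challenge=0.7, cost_caught=5.0, gain_lie=3.0, gain_truth=1.0)
trusting = PartnerModel(p_challenge=0.1, cost_caught=5.0, gain_lie=3.0, gain_truth=1.0)
print(decide_to_lie(skeptic))   # False: 0.3*3 - 0.7*5 = -2.6 < 1.0
print(decide_to_lie(trusting))  # True:  0.9*3 - 0.1*5 =  2.2 > 1.0
```

Under this reading, a partner model bootstrapped from stereotypes would supply initial estimates for `p_challenge` and the payoffs, which the robot could then refine from its interaction history.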
| Original language | English (US) |
| --- | --- |
| Title of host publication | Robots that Talk and Listen |
| Subtitle of host publication | Technology and Social Impact |
| Publisher | Walter de Gruyter GmbH |
| Pages | 203-225 |
| Number of pages | 23 |
| ISBN (Electronic) | 9781614514404 |
| ISBN (Print) | 9781614516033 |
| DOIs | |
| State | Published - Jan 1 2015 |
All Science Journal Classification (ASJC) codes
- General Engineering
- General Computer Science
- General Arts and Humanities
- General Social Sciences