Ethical decision-making is complex, and it is especially difficult to incorporate into autonomous systems. This project focuses on creating and testing a robot architecture that allows the system to be adaptable and ethical in its decision-making in complex situations. Since these systems may be called upon to adapt to evolving norms and cultural expectations, it is crucially important to develop an architecture that incorporates a range of philosophical theories and perspectives. When the robot is tasked with making an ethical choice, its decisions will therefore be guided by multiple ethical frameworks and a simulated moral emotional state. Findings from the project could influence the design of many types of robots, including robot teachers, greeters, therapists, and companions. This research thus has the potential to impact large sections of society, including vulnerable populations, such as individuals with disabilities, that may interact with future robots.
The underlying computational architecture will be expanded to reason in multiple ways according to different ethical frameworks; the selection of a framework, and of the resulting action, will depend upon the agent's moral emotional state. Moral emotions can help guide decision-making when competing or cooperating ethical frameworks recommend conflicting choices. The robot models the human subject's moral emotional state and uses it to select, from the set of competing actions derived from the different frameworks, the ethical action that best conforms to the human's expectations, existing biases, and the situational context itself. The researchers will thus develop an action selection mechanism that recognizes the moral emotional state of the overall situation and uses it to select the appropriate ethical action in the face of conflicting choices. Specifically, this research will investigate the appropriateness of other-deception (deception intended to benefit the human being deceived by the robot) depending on the subject's emotional state (e.g., shame, embarrassment, guilt, or empathy) and the specific task context, which together determine whether the robot should deceive the subject. The plan is to encode ethical frameworks, such as Kantianism, Utilitarianism, and W.D. Ross's moral duties, to serve as a basis for the robot's decision-making. Moral emotions will help guide the robot's selection of which ethical framework should take dominance in a particular situation and for the individuals involved. The approach will be tested in two main scenarios involving practical ethical decisions. First, a robot plays a game with a child and must decide, based on the child's perceived emotional state, whether to engage in other-deception by losing on purpose.
Second, a robot interacts with older adults performing tasks such as pill sorting and must decide whether to reduce their frustration to facilitate training, at a potential trade-off in safety. A key goal is for the robot to reproduce an average human ethical choice and/or, where available, a choice that reflects the consensus of ethical experts.
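The action-selection mechanism described above can be sketched in miniature: each encoded ethical framework recommends an action for the current situation, and the perceived moral emotional state determines which framework takes dominance. This is a hypothetical illustration only; the framework names follow the abstract, but the emotion-to-framework mapping, situation fields, and action labels are invented placeholders, not the project's actual design.

```python
# Minimal sketch of moral-emotion-driven action selection, assuming each
# ethical framework is a function from a situation to a recommended action.
# All mappings and labels below are illustrative assumptions.

# Each framework recommends an action for a given situation.
FRAMEWORK_RECOMMENDATIONS = {
    "kantian": lambda s: "tell_truth",  # duty-based: never deceive
    "utilitarian": lambda s: (          # maximize well-being
        "lose_on_purpose" if s["child_distressed"] else "play_normally"
    ),
    "ross_duties": lambda s: (          # prima facie duties, safety first
        "tell_truth" if s["safety_risk"] else "reduce_frustration"
    ),
}

# The perceived moral emotional state steers which framework dominates
# (a placeholder mapping, not the project's actual policy).
EMOTION_TO_FRAMEWORK = {
    "empathy": "utilitarian",
    "guilt": "kantian",
    "shame": "ross_duties",
}

def select_action(situation, perceived_emotion):
    """Return the action recommended by the framework that the perceived
    moral emotional state makes dominant; default to Kantian reasoning."""
    framework = EMOTION_TO_FRAMEWORK.get(perceived_emotion, "kantian")
    return FRAMEWORK_RECOMMENDATIONS[framework](situation)

# Example: a distressed child and no safety risk, with empathy dominant.
situation = {"child_distressed": True, "safety_risk": False}
print(select_action(situation, "empathy"))  # -> lose_on_purpose
```

In this toy form, conflicting recommendations (the Kantian "tell_truth" versus the utilitarian "lose_on_purpose") are resolved purely by which emotion is perceived, mirroring the role the abstract assigns to moral emotions as the tie-breaker among competing frameworks.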
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Effective start/end date: 2/15/19 → 1/31/23
- National Science Foundation: $400,000.00