TY - JOUR
T1 - Moral decision making in autonomous systems
T2 - Enforcement, moral emotions, dignity, trust, and deception
AU - Arkin, Ronald Craig
AU - Ulam, Patrick
AU - Wagner, Alan R.
N1 - Author Biography:
Ronald Craig Arkin (Fellow, IEEE) received the B.S. degree from the University of Michigan, the M.S. degree from Stevens Institute of Technology, and the Ph.D. degree in computer science from the University of Massachusetts, Amherst, in 1987. He is Regents’ Professor and Director of the Mobile Robot Laboratory at the Georgia Institute of Technology (Georgia Tech), Atlanta. He also serves as the Associate Dean for Research and Space Planning in the College of Computing at Georgia Tech. His research interests include behavior-based reactive control and action-oriented perception for mobile robots and unmanned aerial vehicles, hybrid deliberative/reactive software architectures, robot survivability, multiagent robotic systems, biorobotics, human–robot interaction, robot ethics, and learning in autonomous systems. He has over 170 technical publications in these areas. He wrote a textbook entitled Behavior-Based Robotics (Cambridge, MA: MIT Press, 1998), a book entitled Robot Colonies (New York, NY: Springer-Verlag, 1997), and a recent book entitled Governing Lethal Behavior in Autonomous Robots (London, U.K.: Taylor & Francis, 2009). Funding sources have included the National Science Foundation, DARPA, DTRA, U.S. Army, Savannah River Technology Center, Honda R&D, Samsung, C.S. Draper Laboratory, SAIC, NAVAIR, and the Office of Naval Research.
Funding Information:
Manuscript received August 1, 2010; revised April 23, 2011; accepted July 29, 2011. Date of publication December 9, 2011; date of current version February 17, 2012. This work was supported in part by the U.S. Army Research Office under Contract W911NF-06-1-0252 and by the U.S. Office of Naval Research under MURI Grant N00014-08-1-0696. The authors are with Georgia Institute of Technology, Atlanta, GA 30308 USA (e-mail: [email protected]; [email protected]; [email protected]).
PY - 2012/3
Y1 - 2012/3
N2 - As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. These include: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) an approach towards maintaining dignity in human-robot relationships, with which the paper concludes.
AB - As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. These include: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) an approach towards maintaining dignity in human-robot relationships, with which the paper concludes.
UR - http://www.scopus.com/inward/record.url?scp=84857370423&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84857370423&partnerID=8YFLogxK
U2 - 10.1109/JPROC.2011.2173265
DO - 10.1109/JPROC.2011.2173265
M3 - Article
AN - SCOPUS:84857370423
SN - 0018-9219
VL - 100
SP - 571
EP - 589
JO - Proceedings of the IEEE
JF - Proceedings of the IEEE
IS - 3
M1 - 6099675
ER -