TY - GEN
T1 - Invisible influence
T2 - 2nd AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019
AU - Susser, Daniel
PY - 2019/1/27
Y1 - 2019/1/27
N2 - For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions (what behavioral economists call our choice architectures) are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings (the choice architectures) to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us: effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
AB - For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions (what behavioral economists call our choice architectures) are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings (the choice architectures) to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us: effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
UR - http://www.scopus.com/inward/record.url?scp=85067389036&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067389036&partnerID=8YFLogxK
U2 - 10.1145/3306618.3314286
DO - 10.1145/3306618.3314286
M3 - Conference contribution
T3 - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
SP - 403
EP - 408
BT - AIES 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
Y2 - 27 January 2019 through 28 January 2019
ER -