TY - GEN
T1 - Examining the Effects of Race on Human-AI Cooperation
AU - Atkins, Akil A.
AU - Brown, Matthew S.
AU - Dancy, Christopher L.
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2021.
PY - 2021
Y1 - 2021
N2 - Recent literature has shown that racism and implicit racial biases can affect one’s actions in major ways, from the time it takes police to decide whether they shoot an armed suspect, to a decision on whether to trust a stranger. Given that race is a social/power construct, artifacts can also be racialized, and these racialized agents have been found to be treated differently based on their perceived race. We explored whether people’s decision to cooperate with an AI agent during a task (a modified version of the Stag hunt task) is affected by the knowledge that the AI agent was trained on a population of a particular race (Black, White, or a non-racialized control condition). These data show that White participants performed best when the agent was racialized as White or not racialized at all, while Black participants achieved the highest score when the agent was racialized as Black. Qualitative data indicated that White participants were less likely to report that they believed the AI agent was attempting to cooperate during the task and were more likely to report that they doubted the intelligence of the AI agent. This work suggests that racialization of AI agents, even if superficial and not explicitly related to the behavior of that agent, may result in different cooperation behavior with that agent, showing potentially insidious and pervasive effects of racism on the way people interact with AI agents.
AB - Recent literature has shown that racism and implicit racial biases can affect one’s actions in major ways, from the time it takes police to decide whether they shoot an armed suspect, to a decision on whether to trust a stranger. Given that race is a social/power construct, artifacts can also be racialized, and these racialized agents have been found to be treated differently based on their perceived race. We explored whether people’s decision to cooperate with an AI agent during a task (a modified version of the Stag hunt task) is affected by the knowledge that the AI agent was trained on a population of a particular race (Black, White, or a non-racialized control condition). These data show that White participants performed best when the agent was racialized as White or not racialized at all, while Black participants achieved the highest score when the agent was racialized as Black. Qualitative data indicated that White participants were less likely to report that they believed the AI agent was attempting to cooperate during the task and were more likely to report that they doubted the intelligence of the AI agent. This work suggests that racialization of AI agents, even if superficial and not explicitly related to the behavior of that agent, may result in different cooperation behavior with that agent, showing potentially insidious and pervasive effects of racism on the way people interact with AI agents.
UR - http://www.scopus.com/inward/record.url?scp=85138768059&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85138768059&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-80387-2_27
DO - 10.1007/978-3-030-80387-2_27
M3 - Conference contribution
AN - SCOPUS:85138768059
SN - 9783030803865
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 279
EP - 288
BT - Social, Cultural, and Behavioral Modeling - 14th International Conference, SBP-BRiMS 2021, Proceedings
A2 - Thomson, Robert
A2 - Hussain, Muhammad Nihal
A2 - Dancy, Christopher
A2 - Pyke, Aryn
PB - Springer Science and Business Media Deutschland GmbH
T2 - 14th International Conference on Social, Cultural, and Behavioral Modeling, SBP-BRiMS 2021
Y2 - 6 July 2021 through 9 July 2021
ER -