TY - GEN
T1 - Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles
T2 - 2023 AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, AIES 2023
AU - Narayanan Venkit, Pranav
AU - Gautam, Sanjana
AU - Panchanadikar, Ruchi
AU - Huang, Ting Hao
AU - Wilson, Shomir
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/8/8
Y1 - 2023/8/8
AB - We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods. Biased NLP models can perpetuate stereotypes and lead to algorithmic discrimination, posing a significant challenge to the fairness and justice of AI systems. Our study employs a two-step mixed-methods approach that includes both quantitative and qualitative analysis to identify and understand the impact of nationality bias in a text generation model. Through our human-centered quantitative analysis, we measure the extent of nationality bias in articles generated by AI sources. We then conduct open-ended interviews with participants, performing qualitative coding and thematic analysis to understand the implications of these biases on human readers. Our findings reveal that biased NLP models tend to replicate and amplify existing societal biases, which can translate to harm if used in a sociotechnical setting. The qualitative analysis from our interviews offers insights into the experience readers have when encountering such articles, highlighting the potential to shift a reader's perception of a country. These findings emphasize the critical role of public perception in shaping AI's impact on society and the need to correct biases in AI systems.
UR - http://www.scopus.com/inward/record.url?scp=85173612839&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85173612839&partnerID=8YFLogxK
U2 - 10.1145/3600211.3604667
DO - 10.1145/3600211.3604667
M3 - Conference contribution
AN - SCOPUS:85173612839
T3 - AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
SP - 554
EP - 565
BT - AIES 2023 - Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
PB - Association for Computing Machinery, Inc
Y2 - 8 August 2023 through 10 August 2023
ER -