TY - GEN
T1 - To Impress an Algorithm
T2 - 19th International Conference on Wisdom, Well-Being, Win-Win, iConference 2024
AU - Girona, Antonio E.
AU - Yarger, Lynette
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
PY - 2024
Y1 - 2024
AB - Technology firms increasingly leverage artificial intelligence (AI) to enhance human decision-making in the rapidly evolving talent acquisition landscape. However, the ramifications of these advancements for workforce diversity remain a topic of intense debate. Drawing upon Gilliland’s procedural justice framework, we explore how IT job candidates interpret the fairness of AI-driven recruitment systems. Gilliland’s model posits that an organization’s adherence to specific fairness principles, such as honesty and the opportunity to perform, profoundly shapes candidates’ self-perceptions, their judgments of the recruitment system’s equity, and the overall attractiveness of the organization. Through focus groups and interviews, we engaged 47 women, Black, and Latinx or Hispanic undergraduates specializing in computer and information science to discern how gender, race, and ethnicity influence attitudes toward AI in hiring. Three procedural justice rules (consistency of administration, job-relatedness, and selection information) emerged as critical in shaping participants’ fairness perceptions. Although discussed less frequently, the propriety of questions held significant resonance for Black and Latinx or Hispanic participants. Our study underscores the critical role of fairness evaluations for organizations, especially those striving to diversify the tech workforce.
UR - http://www.scopus.com/inward/record.url?scp=85192251835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85192251835&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-57860-1_4
DO - 10.1007/978-3-031-57860-1_4
M3 - Conference contribution
AN - SCOPUS:85192251835
SN - 9783031578595
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 43
EP - 61
BT - Wisdom, Well-Being, Win-Win - 19th International Conference, iConference 2024, Proceedings
A2 - Sserwanga, Isaac
A2 - Joho, Hideo
A2 - Ma, Jie
A2 - Hansen, Preben
A2 - Wu, Dan
A2 - Koizumi, Masanori
A2 - Gilliland, Anne J.
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 15 April 2024 through 26 April 2024
ER -