TY - GEN
T1 - NeuFair: Neural Network Fairness Repair with Dropout
T2 - 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2024
AU - Dasu, Vishnu Asutosh
AU - Kumar, Ashish
AU - Tizpaz-Niari, Saeid
AU - Tan, Gang
N1 - Publisher Copyright:
© 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/9/11
Y1 - 2024/9/11
AB - This paper investigates neuron dropout as a post-processing bias mitigation method for deep neural networks (DNNs). Neural-driven software solutions are increasingly applied in socially critical domains with significant fairness implications. While DNNs are exceptional at learning statistical patterns from data, they may encode and amplify historical biases. Existing bias mitigation algorithms often require modifying the input dataset or the learning algorithms. We posit that prevalent dropout methods may be an effective and less intrusive approach to improving the fairness of pre-trained DNNs during inference. However, finding the ideal set of neurons to drop is a combinatorial problem. We propose NeuFair, a family of post-processing randomized algorithms that mitigate unfairness in pre-trained DNNs via dropout during inference. Our randomized search is guided by an objective to minimize discrimination while maintaining the model's utility. We show that NeuFair is efficient and effective in improving fairness (up to 69%) with minimal or no degradation in model performance. We provide intuitive explanations of these phenomena and carefully examine the influence of various NeuFair hyperparameters on the results. Finally, we empirically and conceptually compare NeuFair to different state-of-the-art bias mitigators.
UR - http://www.scopus.com/inward/record.url?scp=85205567869&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85205567869&partnerID=8YFLogxK
U2 - 10.1145/3650212.3680380
DO - 10.1145/3650212.3680380
M3 - Conference contribution
AN - SCOPUS:85205567869
T3 - ISSTA 2024 - Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis
SP - 1541
EP - 1553
BT - ISSTA 2024 - Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis
A2 - Christakis, Maria
A2 - Pradel, Michael
PB - Association for Computing Machinery, Inc
Y2 - 16 September 2024 through 20 September 2024
ER -