TY - GEN
T1 - Information-Theoretic Testing and Debugging of Fairness Defects in Deep Neural Networks
AU - Monjezi, Verya
AU - Trivedi, Ashutosh
AU - Tan, Gang
AU - Tizpaz-Niari, Saeid
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomically critical decision-support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions, amplifying existing biases or introducing new ones, that may disadvantage protected individuals/groups and may stand to violate legal protections. While the existing search-based software testing approaches have been effective in discovering fairness defects, they do not supplement these defects with debugging aids, such as severity and causal explanations, crucial to help developers triage and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training, or do they merely reflect biases present in the training data? To answer such questions, we present Dice: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs. The key goal of Dice is to assist software developers in triaging fairness defects by ordering them by their severity. Towards this goal, we quantify fairness in terms of protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering these defects; our empirical evaluation shows that it also improves the search efficiency due to the resulting smoothness of the search space. Guided by the quantitative fairness, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that Dice efficiently characterizes the amounts of discrimination, effectively generates discriminatory instances (vis-à-vis the state-of-the-art techniques), and localizes layers/neurons with significant biases.
AB - Deep feedforward neural networks (DNNs) are increasingly deployed in socioeconomically critical decision-support software systems. DNNs are exceptionally good at finding minimal, sufficient statistical patterns within their training data. Consequently, DNNs may learn to encode decisions, amplifying existing biases or introducing new ones, that may disadvantage protected individuals/groups and may stand to violate legal protections. While the existing search-based software testing approaches have been effective in discovering fairness defects, they do not supplement these defects with debugging aids, such as severity and causal explanations, crucial to help developers triage and decide on the next course of action. Can we measure the severity of fairness defects in DNNs? Are these defects symptomatic of improper training, or do they merely reflect biases present in the training data? To answer such questions, we present Dice: an information-theoretic testing and debugging framework to discover and localize fairness defects in DNNs. The key goal of Dice is to assist software developers in triaging fairness defects by ordering them by their severity. Towards this goal, we quantify fairness in terms of protected information (in bits) used in decision making. A quantitative view of fairness defects not only helps in ordering these defects; our empirical evaluation shows that it also improves the search efficiency due to the resulting smoothness of the search space. Guided by the quantitative fairness, we present a causal debugging framework to localize inadequately trained layers and neurons responsible for fairness defects. Our experiments over ten DNNs, developed for socially critical tasks, show that Dice efficiently characterizes the amounts of discrimination, effectively generates discriminatory instances (vis-à-vis the state-of-the-art techniques), and localizes layers/neurons with significant biases.
UR - https://www.scopus.com/pages/publications/85167877966
U2 - 10.1109/ICSE48619.2023.00136
DO - 10.1109/ICSE48619.2023.00136
M3 - Conference contribution
AN - SCOPUS:85167877966
T3 - Proceedings - International Conference on Software Engineering
SP - 1571
EP - 1582
BT - Proceedings - 2023 IEEE/ACM 45th International Conference on Software Engineering, ICSE 2023
PB - IEEE Computer Society
T2 - 45th IEEE/ACM International Conference on Software Engineering, ICSE 2023
Y2 - 15 May 2023 through 16 May 2023
ER -