TY - GEN
T1 - FairLay-ML
T2 - 47th IEEE/ACM International Conference on Software Engineering, ICSE-Companion 2025
AU - Yu, Normen
AU - Carreon, Luciana
AU - Tan, Gang
AU - Tizpaz-Niari, Saeid
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Data-driven software solutions have been widely adopted in critical domains with significant socio-economic, legal, and ethical implications. The rapid adoption of data-driven solutions, however, poses major threats to the trustworthiness of automated decision-support software. Developers' diminished understanding of these solutions and historical or current biases in the datasets are primary challenges. To aid data-driven software developers and end-users, we present FairLay-ML, a debugging tool that tests and explains the fairness implications of data-driven solutions. FairLay-ML visualizes the logic of datasets, trained models, and decisions for a given data point. In addition, it trains various models with varying fairness-accuracy trade-offs. Crucially, FairLay-ML incorporates counterfactual fairness testing that finds bugs beyond the development datasets. We conducted two studies through FairLay-ML that allowed us to measure false positives/negatives in prevalent counterfactual testing and to understand the human perception of counterfactual test cases in a class survey. FairLay-ML and its benchmarks are publicly available at https://github.com/Pennswood/FairLay-ML. The live version of the tool is available at https://fairlayml-v2.streamlit.app/. We provide a video demo of the tool at https://youtu.be/wNI9UWkywVU?t=133.
UR - https://www.scopus.com/pages/publications/105008499471
UR - https://www.scopus.com/inward/citedby.url?scp=105008499471&partnerID=8YFLogxK
U2 - 10.1109/ICSE-Companion66252.2025.00016
DO - 10.1109/ICSE-Companion66252.2025.00016
M3 - Conference contribution
AN - SCOPUS:105008499471
T3 - Proceedings - International Conference on Software Engineering
SP - 25
EP - 28
BT - Proceedings - 2025 IEEE/ACM 47th International Conference on Software Engineering, ICSE-Companion 2025
PB - IEEE Computer Society
Y2 - 27 April 2025 through 3 May 2025
ER -