TY - JOUR
T1 - Proximity-Constrained Counterfactuals for Explainable Diabetes Risk Assessment
AU - Kabir, Md Faisal
AU - Mahakal, Abhishek Mandar
AU - Wang, Yucheng
AU - AlSobeh, Anas
AU - Al-Ahmad, Bilal
AU - Alkhawaldah, Rami S.
N1 - Publisher Copyright:
Copyright © 2025 Md Faisal Kabir et al. Applied Computational Intelligence and Soft Computing published by John Wiley & Sons Ltd.
PY - 2025
Y1 - 2025
N2 - In the era of widespread machine learning models, the opaque nature of their decision-making processes poses significant risks, especially in critical domains like healthcare. This paper aims to support nondiabetic individuals by helping them understand potential risk factors associated with developing diabetes, thereby enabling them to take preventive measures. This study highlights the crucial need for explainable artificial intelligence by presenting a pragmatic approach to generate proximity-constrained counterfactuals for assessing diabetic risk. By synergistically combining the capabilities of Local Interpretable Model-Agnostic Explanations and Diverse Counterfactual Explanations, our methodology focuses on perturbing the most influential features identified by different AI explainers to reverse decisions while maintaining proximity to the decision boundary. The resulting counterfactuals offer actionable insights, enabling individuals to actively manage lifestyle choices and potentially reduce their diabetes risk. Our findings demonstrate the effective implementation of this strategy, underscoring its practicality and user-oriented decision-support capabilities. It addresses constraints and integrates human assessments for a more comprehensive evaluation. The proposed approach contributes to enhancing trust and transparency in machine learning models for diabetic risk prediction through interpretable and actionable counterfactual explanations.
UR - https://www.scopus.com/pages/publications/105025212202
U2 - 10.1155/acis/3424976
DO - 10.1155/acis/3424976
M3 - Article
AN - SCOPUS:105025212202
SN - 1687-9724
VL - 2025
JO - Applied Computational Intelligence and Soft Computing
JF - Applied Computational Intelligence and Soft Computing
IS - 1
M1 - 3424976
ER -