TY - JOUR
T1 - Learning fair models without sensitive attributes
T2 - A generative approach
AU - Zhu, Huaisheng
AU - Dai, Enyan
AU - Liu, Hui
AU - Wang, Suhang
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/12/7
Y1 - 2023/12/7
AB - Most existing fair classifiers rely on sensitive attributes to achieve fairness. However, in many scenarios we cannot obtain sensitive attributes due to privacy and legal issues, and their absence challenges many existing fair classifiers. Although sensitive attributes are unavailable, many applications offer features or information in various formats that are relevant to the sensitive attributes. For example, a person's purchase history can reflect his/her race, which can help in learning classifiers that are fair with respect to race. However, work on exploiting such relevant features to learn fair models without sensitive attributes is rather limited. Therefore, in this paper, we study the novel problem of learning fair models without sensitive attributes by exploring relevant features. We propose a probabilistic generative framework that effectively estimates the sensitive attributes from training data with relevant features in various formats and utilizes the estimated sensitive attribute information to learn fair models. Experimental results on real-world datasets show the effectiveness of our framework in terms of both accuracy and fairness. Our source code is available at: https://github.com/huaishengzhu/FairWS.
UR - http://www.scopus.com/inward/record.url?scp=85173611135&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85173611135&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2023.126841
DO - 10.1016/j.neucom.2023.126841
M3 - Article
AN - SCOPUS:85173611135
SN - 0925-2312
VL - 561
JO - Neurocomputing
JF - Neurocomputing
M1 - 126841
ER -