A Study of Implicit Language Model Bias Against People With Disabilities

Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson

Research output: Contribution to journal › Conference article › peer-review

38 Scopus citations

Abstract

Pretrained language models (PLMs) have been shown to exhibit sociodemographic biases, such as against gender and race, raising concerns of downstream biases in language technologies. However, PLMs’ biases against people with disabilities (PWDs) have received little attention, in spite of their potential to cause similar harms. Using perturbation sensitivity analysis, we test an assortment of popular word embedding-based and transformer-based PLMs and show significant biases against PWDs in all of them. The results demonstrate how models trained on large corpora widely favor ableist language.
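
The method named in the abstract, perturbation sensitivity analysis, compares a model's scores for otherwise identical sentences that differ only in a perturbed reference. The sketch below illustrates that idea only; it assumes the default Hugging Face sentiment-analysis pipeline as the scored model and uses a few illustrative templates and disability-related terms, which are not the paper's actual corpus, term list, or scoring setup.

```python
# Minimal sketch of perturbation sensitivity analysis for disability-related bias.
# Assumptions (not from the paper): the default Hugging Face sentiment pipeline
# stands in for the evaluated model, and the templates/terms below are illustrative.
from statistics import mean
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # DistilBERT fine-tuned on SST-2 by default

templates = [
    "I met {} at the coffee shop yesterday.",
    "{} will be joining our team next week.",
    "My neighbor is {}.",
]
neutral_term = "a person"
perturbed_terms = ["a deaf person", "a blind person", "a person who uses a wheelchair"]

def signed_score(text: str) -> float:
    """Map the classifier output to a signed sentiment score in [-1, 1]."""
    result = classifier(text)[0]
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

# Perturbation sensitivity: average shift in score when the neutral reference
# is replaced by a disability-related reference in otherwise identical sentences.
baseline = {t: signed_score(t.format(neutral_term)) for t in templates}
for term in perturbed_terms:
    shifts = [signed_score(t.format(term)) - baseline[t] for t in templates]
    print(f"{term!r}: mean sentiment shift = {mean(shifts):+.3f}")
```

In this kind of setup, a consistently negative mean shift for the perturbed terms would indicate the sort of ableist association the paper reports, though the paper's actual measurements cover a broader set of models and prompts.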

Original language: English (US)
Pages (from-to): 1324-1332
Number of pages: 9
Journal: Proceedings - International Conference on Computational Linguistics, COLING
Volume: 29
Issue number: 1
State: Published - 2022
Event: 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Korea, Republic of
Duration: Oct 12, 2022 to Oct 17, 2022

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Theoretical Computer Science
