AI in the hands of imperfect users

Kristin M. Kostick-Quenet, Sara Gerke

Research output: Contribution to journal › Article › peer-review

42 Scopus citations

Abstract

As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has been paid to potential biases among AI/ML's human users, or to the factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools and call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users toward more critical and reflective decision making when using AI/ML.

Original language: English (US)
Article number: 197
Journal: npj Digital Medicine
Volume: 5
Issue number: 1
DOIs
State: Published - Dec 2022

All Science Journal Classification (ASJC) codes

  • Medicine (miscellaneous)
  • Health Informatics
  • Computer Science Applications
  • Health Information Management
