Human-Aligned AI Must Counter Overtrust

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The psychological reality of a human baseline of overtrust in AI has been increasingly recognized in recent years. Here, we argue that for human-aligned AI to successfully advance human goals and welfare, in many contexts it will need to gauge, and if need be counter, human propensities for overtrust. We briefly summarize our original program of research documenting overtrust in contexts of grave decision-making, and offer suggestions for ways that artificial agents might be prepared to estimate and respond to human overtrust.

Original language: English (US)
Title of host publication: Proceedings - 2025 IEEE Conference on Artificial Intelligence, CAI 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1239-1242
Number of pages: 4
ISBN (Electronic): 9798331524005
DOIs
State: Published - 2025
Event: 3rd IEEE Conference on Artificial Intelligence, CAI 2025 - Santa Clara, United States
Duration: May 5, 2025 to May 7, 2025

Publication series

Name: Proceedings - 2025 IEEE Conference on Artificial Intelligence, CAI 2025

Conference

Conference: 3rd IEEE Conference on Artificial Intelligence, CAI 2025
Country/Territory: United States
City: Santa Clara
Period: 5/5/25 to 5/7/25

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Information Systems and Management
  • Modeling and Simulation
