CRII: HCC: Empowering Human-AI Collaboration through Conversational Explanations

  • Tsai, Chun-Hua (PI)

Project: Research project

Project Details

Description

This award is funded in whole or in part under the American Rescue Plan Act of 2021 (Public Law 117-2).

Research has demonstrated that providing explanations about recommendation systems positively affects users' experiences. Accordingly, developers have adopted many explainable recommendation models and interfaces in applications such as social media. However, these explanations are not personalized to users with varied digital literacy or computational knowledge, and they may not always ensure that users understand the underlying rationale of the contributing data or algorithms. This project fills this research gap by exploring the dynamic process of a user's understanding of an AI-based, explainable recommender system and how this understanding evolves. In addition, it extends the existing research horizon on explainable recommender systems by investigating a novel conversational interaction between people and target systems. Finally, this work will enable a new ecosystem through a unified research program consisting of modeling explanation strategies for different users and exploring and developing practical system prototypes. The project's success will deepen the scientific understanding of designing and implementing fair and transparent everyday-use AI systems for users of varied backgrounds and expertise. For instance, users may seek personalized explanations from a health recommendation system and make an informed decision based on transparency and comprehension.

This project aims to design and develop a conversational agent that provides personalized explanations around AI-based results and recommendations. The work will use participatory design to explore user mental models regarding machine-generated explanations, develop a conversational agent to elicit the user's mental model, and provide personalized explanations based on the inferred model. The project pursues three objectives built on a state-of-the-art explainable hybrid recommender system. First, it conducts user-centric participatory-design focus groups to measure and estimate users' mental models when interacting with multi-level-granularity explanations for text, topic, and social recommendation models. Second, it uses a human-centered design approach to develop a prototype that allows dynamic elicitation mechanisms and provides conversational explanations. Third, it adopts mixed methods to investigate and theorize users' experience and comprehension when interacting with the developed conversation-based explanations in hybrid recommender systems.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
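To make the described architecture concrete, the sketch below shows a toy conversational loop that adapts explanation granularity to an inferred user mental-model level. It is a minimal illustration only: the `UserModel` class, the keyword-based elicitation rule, and the explanation templates are all hypothetical and are not drawn from the project's actual prototype.

```python
from dataclasses import dataclass

# Hypothetical explanation templates at three levels of granularity,
# loosely mirroring the text/topic/social models named in the abstract.
EXPLANATIONS = {
    "novice": "We suggested this article because it is similar to ones you liked.",
    "intermediate": "This article shares topics (e.g., 'health policy') with your reading history.",
    "expert": ("A hybrid model combined text similarity (0.82), topic overlap "
               "(3 shared topics), and social signals (2 colleagues read it)."),
}

@dataclass
class UserModel:
    """Inferred mental-model level, updated from conversational turns."""
    level: str = "novice"

    def update(self, utterance: str) -> None:
        # Toy elicitation rule: escalate granularity when the user asks
        # about mechanisms; a real system would use a trained classifier.
        text = utterance.lower()
        if any(word in text for word in ("algorithm", "weight", "score")):
            self.level = "expert"
        elif "topic" in text:
            self.level = "intermediate"

def explain(user: UserModel, utterance: str) -> str:
    """Update the inferred model, then return a personalized explanation."""
    user.update(utterance)
    return EXPLANATIONS[user.level]

if __name__ == "__main__":
    user = UserModel()
    for turn in ("Why am I seeing this?",
                 "Which topics matched?",
                 "How does the algorithm score it?"):
        print(f"User: {turn}\nAgent: {explain(user, turn)}\n")
```

In this sketch the agent starts with the coarsest explanation and only reveals model internals once the user's questions signal they want them, which is one plausible reading of "eliciting the user's mental model and providing personalized explanations based on the inferred model."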
Status: Active
Effective start/end date: 10/1/18 – 7/31/25

Funding

  • National Science Foundation: $174,293.00
