
A Conceptual Model of Trust in Generative AI Systems

Research output: Chapter in Book/Report/Conference proceedingConference contribution

Abstract

Generative Artificial Intelligence (GAI) significantly impacts various sectors, offering innovative solutions in consultation, self-education, and creativity. However, the trustworthiness of GAI outputs is questionable due to the absence of theoretical correctness guarantees and the opacity of Artificial Intelligence (AI) processes. These issues, compounded by potential biases and inaccuracies, pose challenges to GAI adoption. This paper delves into the trust dynamics in GAI, highlighting its unique capabilities to generate novel outputs and adapt over time, distinct from traditional AI. We introduce a model analyzing trust in GAI through user experience, operational capabilities, contextual factors, and task types. This work aims to enrich the theoretical discourse and practical approaches in GAI, setting a foundation for future research and applications.

Original language: English (US)
Title of host publication: Proceedings of the 58th Hawaii International Conference on System Sciences, HICSS 2025
Editors: Tung X. Bui
Publisher: IEEE Computer Society
Pages: 7019-7028
Number of pages: 10
ISBN (Electronic): 9780998133188
DOIs
State: Published - 2025
Event: 58th Hawaii International Conference on System Sciences, HICSS 2025 - Honolulu, United States
Duration: Jan 7, 2025 to Jan 10, 2025

Publication series

Name: Proceedings of the Annual Hawaii International Conference on System Sciences
ISSN (Print): 1530-1605

Conference

Conference: 58th Hawaii International Conference on System Sciences, HICSS 2025
Country/Territory: United States
City: Honolulu
Period: 1/7/25 to 1/10/25

All Science Journal Classification (ASJC) codes

  • General Engineering
