The Longtail Impact of Generative AI on Disinformation: Harmonizing Dichotomous Perspectives

Jason S. Lucas, Barani Maung Maung, Maryam Tabar, Keegan Mcbride, Dongwon Lee, San Murugesan

Research output: Contribution to journal › Article › peer-review


Abstract

Generative AI (GenAI) poses significant risks in creating convincing yet factually ungrounded content, particularly in 'longtail' contexts of high-impact events and resource-limited settings. While some argue that current disinformation ecosystems naturally limit GenAI's impact, we contend that this perspective neglects longtail contexts where the consequences of disinformation are most profound. This article analyzes the potential impact of GenAI-driven disinformation in longtail events and settings, focusing on 1) quantity: its ability to flood information ecosystems during critical events; 2) quality: the challenge of distinguishing authentic content from high-quality GenAI content; 3) personalization: its capacity for precise microtargeting that exploits individual vulnerabilities; and 4) hallucination: the danger of unintentional false information generation, especially in high-stakes situations. We then propose strategies to combat disinformation in these contexts. Our analysis underscores the need for proactive measures to mitigate risks, safeguard social unity, and combat the erosion of trust in the GenAI era, particularly in vulnerable communities and during critical events.

Original language: English (US)
Pages (from-to): 12-19
Number of pages: 8
Journal: IEEE Intelligent Systems
Volume: 39
Issue number: 5
DOIs
State: Published - 2024

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Artificial Intelligence
