TY - JOUR
T1 - The Longtail Impact of Generative AI on Disinformation
T2 - Harmonizing Dichotomous Perspectives
AU - Lucas, Jason S.
AU - Maung, Barani Maung
AU - Tabar, Maryam
AU - McBride, Keegan
AU - Lee, Dongwon
AU - Murugesan, San
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Generative AI (GenAI) poses significant risks in creating convincing yet factually ungrounded content, particularly in 'longtail' contexts of high-impact events and resource-limited settings. While some argue that current disinformation ecosystems naturally limit GenAI's impact, we contend that this perspective neglects longtail contexts where disinformation consequences are most profound. This article analyzes the potential impact of GenAI's disinformation in longtail events and settings, focusing on 1) quantity: its ability to flood information ecosystems during critical events; 2) quality: the challenge of distinguishing authentic content from high-quality GenAI content; 3) personalization: its capacity for precise microtargeting exploiting individual vulnerabilities; and 4) hallucination: the danger of unintentional false information generation, especially in high-stakes situations. We then propose strategies to combat disinformation in these contexts. Our analysis underscores the need for proactive measures to mitigate risks, safeguard social unity, and combat the erosion of trust in the GenAI era, particularly in vulnerable communities and during critical events.
UR - http://www.scopus.com/inward/record.url?scp=85206800792&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85206800792&partnerID=8YFLogxK
U2 - 10.1109/MIS.2024.3439109
DO - 10.1109/MIS.2024.3439109
M3 - Article
AN - SCOPUS:85206800792
SN - 1541-1672
VL - 39
SP - 12
EP - 19
JO - IEEE Intelligent Systems
JF - IEEE Intelligent Systems
IS - 5
ER -