TY - GEN
T1 - Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts?
AU - Uchendu, Adaku
AU - Lee, Jooyoung
AU - Shen, Hua
AU - Le, Thai
AU - Huang, Ting-Hao Kenneth
AU - Lee, Dongwon
N1 - Publisher Copyright:
© 2023, Association for the Advancement of Artificial Intelligence. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts. However, this progress poses security and privacy concerns, necessitating effective solutions for distinguishing deepfake texts from human-written ones. Although prior works studied humans’ ability to detect deepfake texts, none has examined whether “collaboration” among humans improves the detection of deepfake texts. In this study, to address this gap of understanding on deepfake texts, we conducted experiments with two groups: (1) non-expert individuals from the AMT platform and (2) writing experts from the Upwork platform. The results demonstrate that collaboration among humans can potentially improve the detection of deepfake texts for both groups, increasing detection accuracies by 6.36% for non-experts and 12.76% for experts, respectively, compared to individuals’ detection accuracies. We further analyze the explanations that humans used for detecting a piece of text as deepfake text, and find that the strongest indicator of deepfake texts is their lack of coherence and consistency. Our study provides useful insights for future tools and framework designs to facilitate the collaborative human detection of deepfake texts. The experiment datasets and AMT implementations are available at: https://github.com/huashen218/llm-deepfake-human-study.git
AB - Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts. However, this progress poses security and privacy concerns, necessitating effective solutions for distinguishing deepfake texts from human-written ones. Although prior works studied humans’ ability to detect deepfake texts, none has examined whether “collaboration” among humans improves the detection of deepfake texts. In this study, to address this gap of understanding on deepfake texts, we conducted experiments with two groups: (1) non-expert individuals from the AMT platform and (2) writing experts from the Upwork platform. The results demonstrate that collaboration among humans can potentially improve the detection of deepfake texts for both groups, increasing detection accuracies by 6.36% for non-experts and 12.76% for experts, respectively, compared to individuals’ detection accuracies. We further analyze the explanations that humans used for detecting a piece of text as deepfake text, and find that the strongest indicator of deepfake texts is their lack of coherence and consistency. Our study provides useful insights for future tools and framework designs to facilitate the collaborative human detection of deepfake texts. The experiment datasets and AMT implementations are available at: https://github.com/huashen218/llm-deepfake-human-study.git
UR - http://www.scopus.com/inward/record.url?scp=85208200631&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85208200631&partnerID=8YFLogxK
U2 - 10.1609/hcomp.v11i1.27557
DO - 10.1609/hcomp.v11i1.27557
M3 - Conference contribution
AN - SCOPUS:85208200631
SN - 9781577358848
T3 - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, HCOMP
SP - 163
EP - 174
BT - HCOMP 2023 - Proceedings of the 11th AAAI Conference on Human Computation and Crowdsourcing
A2 - Bernstein, M.
A2 - Bozzon, A.
PB - Association for the Advancement of Artificial Intelligence
T2 - 11th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2023
Y2 - 6 November 2023 through 9 November 2023
ER -