TY - GEN
T1 - If in a Crowdsourced Data Annotation Pipeline, a GPT-4
AU - He, Zeyu
AU - Huang, Chieh-Yang
AU - Ding, Chien-Kuang Cornelia
AU - Rohatgi, Shaurya
AU - Huang, Ting-Hao
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s)
PY - 2024/5/11
Y1 - 2024/5/11
N2 - Recent studies indicated that GPT-4 outperforms online crowd workers in data labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and for emphasizing individual workers' performance over the whole data-annotation process. This paper compared GPT-4 against an ethical and well-executed MTurk pipeline, with 415 workers labeling 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker interfaces yielded 127,080 labels, which were then used to infer the final labels through eight label-aggregation algorithms. Our evaluation showed that, despite best practices, the MTurk pipeline's highest accuracy was 81.5%, whereas GPT-4 achieved 83.6%. Interestingly, when GPT-4's labels were combined with crowd labels collected via an advanced worker interface for aggregation, two of the eight algorithms achieved even higher accuracy (87.5% and 87.0%). Further analysis suggested that when the crowd's and GPT-4's labeling strengths are complementary, aggregating them can increase labeling accuracy.
AB - Recent studies indicated that GPT-4 outperforms online crowd workers in data labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and for emphasizing individual workers' performance over the whole data-annotation process. This paper compared GPT-4 against an ethical and well-executed MTurk pipeline, with 415 workers labeling 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker interfaces yielded 127,080 labels, which were then used to infer the final labels through eight label-aggregation algorithms. Our evaluation showed that, despite best practices, the MTurk pipeline's highest accuracy was 81.5%, whereas GPT-4 achieved 83.6%. Interestingly, when GPT-4's labels were combined with crowd labels collected via an advanced worker interface for aggregation, two of the eight algorithms achieved even higher accuracy (87.5% and 87.0%). Further analysis suggested that when the crowd's and GPT-4's labeling strengths are complementary, aggregating them can increase labeling accuracy.
UR - http://www.scopus.com/inward/record.url?scp=85194813653&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85194813653&partnerID=8YFLogxK
U2 - 10.1145/3613904.3642834
DO - 10.1145/3613904.3642834
M3 - Conference contribution
AN - SCOPUS:85194813653
T3 - Conference on Human Factors in Computing Systems - Proceedings
BT - CHI 2024 - Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
PB - Association for Computing Machinery
T2 - 2024 CHI Conference on Human Factors in Computing Systems, CHI 2024
Y2 - 11 May 2024 through 16 May 2024
ER -
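
The abstract's central mechanism is label aggregation: pooling multiple noisy labels per item to infer a final label, with GPT-4 treated as one more annotator alongside the crowd. Below is a minimal sketch using plain majority voting, a common aggregation baseline; the record does not name the paper's eight algorithms, and the function name, labels, and example data here are hypothetical illustrations, not the paper's actual pipeline.

```python
from collections import Counter

def majority_vote(labels):
    """Return the plurality label; ties resolve to the first label
    seen (Counter preserves insertion order on Python 3.7+)."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical crowd labels for one sentence segment, using
# CODA-19-style aspect labels; GPT-4 is treated as one extra voter.
crowd = ["finding", "method", "finding", "background", "method"]
gpt4 = "finding"

print(majority_vote(crowd))           # crowd-only aggregate
print(majority_vote(crowd + [gpt4]))  # crowd + GPT-4 aggregate
```

In this toy example the crowd alone is tied between "finding" and "method", and GPT-4's vote breaks the tie, loosely illustrating how complementary labeling strengths can raise aggregate accuracy; the aggregators evaluated in the paper are presumably more sophisticated than simple plurality voting.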