TY - GEN
T1 - Good Data, Large Data, or No Data? Comparing Three Approaches in Developing Research Aspect Classifiers for Biomedical Papers
AU - Chandrasekhar, Shreya
AU - Huang, Chieh-Yang
AU - Huang, Ting-Hao
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - The rapid growth of scientific publications, particularly during the COVID-19 pandemic, emphasizes the need for tools to help researchers efficiently comprehend the latest advancements. One essential part of understanding scientific literature is research aspect classification, which categorizes sentences in abstracts into Background, Purpose, Method, and Finding. In this study, we investigate the impact of different datasets on model performance for the crowd-annotated CODA-19 research aspect classification task. Specifically, we explore the potential benefits of using the large, automatically curated PubMed 200K RCT dataset and evaluate the effectiveness of large language models (LLMs), such as LLaMA, GPT-3, ChatGPT, and GPT-4. Our results indicate that using the PubMed 200K RCT dataset does not improve performance for the CODA-19 task. We also observe that while GPT-4 performs well, it does not outperform the SciBERT model fine-tuned on the CODA-19 dataset, emphasizing the importance of a dedicated and task-aligned dataset for the target task. Our code is available at https://github.com/Crowd-AI-Lab/CODA-19-exp.
AB - The rapid growth of scientific publications, particularly during the COVID-19 pandemic, emphasizes the need for tools to help researchers efficiently comprehend the latest advancements. One essential part of understanding scientific literature is research aspect classification, which categorizes sentences in abstracts into Background, Purpose, Method, and Finding. In this study, we investigate the impact of different datasets on model performance for the crowd-annotated CODA-19 research aspect classification task. Specifically, we explore the potential benefits of using the large, automatically curated PubMed 200K RCT dataset and evaluate the effectiveness of large language models (LLMs), such as LLaMA, GPT-3, ChatGPT, and GPT-4. Our results indicate that using the PubMed 200K RCT dataset does not improve performance for the CODA-19 task. We also observe that while GPT-4 performs well, it does not outperform the SciBERT model fine-tuned on the CODA-19 dataset, emphasizing the importance of a dedicated and task-aligned dataset for the target task. Our code is available at https://github.com/Crowd-AI-Lab/CODA-19-exp.
UR - http://www.scopus.com/inward/record.url?scp=85174528248&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174528248&partnerID=8YFLogxK
U2 - 10.18653/v1/2023.bionlp-1.8
DO - 10.18653/v1/2023.bionlp-1.8
M3 - Conference contribution
AN - SCOPUS:85174528248
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 103
EP - 113
BT - BioNLP 2023 - BioNLP and BioNLP-ST, Proceedings of the Workshop
A2 - Demner-Fushman, Dina
A2 - Ananiadou, Sophia
A2 - Cohen, Kevin
PB - Association for Computational Linguistics (ACL)
T2 - 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, BioNLP 2023
Y2 - 13 July 2023
ER -