TY - GEN
T1 - Unity in Diversity
T2 - 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
AU - Wang, Xiaochen
AU - Luo, Junyu
AU - Wang, Jiaqi
AU - Zhong, Yuan
AU - Zhang, Xiaokun
AU - Wang, Yaqing
AU - Bhatia, Parminder
AU - Xiao, Cao
AU - Ma, Fenglong
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Although pre-training has become a prevalent approach for addressing various biomedical tasks, the current efficacy of pre-trained models is hindered by their reliance on a limited scope of medical sources. This limitation results in data scarcity during pre-training and restricts the range of applicable downstream tasks. In response to these challenges, we develop Medical Cross-Source Pre-training (MEDCSP), a new pre-training strategy designed to bridge the gap between multimodal medical sources. MEDCSP employs modality-level aggregation to unify patient data within individual sources. Additionally, leveraging temporal information and diagnosis history, MEDCSP effectively captures explicit and implicit correlations between patients across different sources. To evaluate the proposed strategy, we conduct comprehensive experiments using 6 modalities from 2 real-world medical data sources, evaluating MEDCSP on 4 tasks against 19 baselines and marking an initial yet essential step towards cross-source modeling in the medical domain.
AB - Although pre-training has become a prevalent approach for addressing various biomedical tasks, the current efficacy of pre-trained models is hindered by their reliance on a limited scope of medical sources. This limitation results in data scarcity during pre-training and restricts the range of applicable downstream tasks. In response to these challenges, we develop Medical Cross-Source Pre-training (MEDCSP), a new pre-training strategy designed to bridge the gap between multimodal medical sources. MEDCSP employs modality-level aggregation to unify patient data within individual sources. Additionally, leveraging temporal information and diagnosis history, MEDCSP effectively captures explicit and implicit correlations between patients across different sources. To evaluate the proposed strategy, we conduct comprehensive experiments using 6 modalities from 2 real-world medical data sources, evaluating MEDCSP on 4 tasks against 19 baselines and marking an initial yet essential step towards cross-source modeling in the medical domain.
UR - https://www.scopus.com/pages/publications/85204426967
UR - https://www.scopus.com/inward/citedby.url?scp=85204426967&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.acl-long.199
DO - 10.18653/v1/2024.acl-long.199
M3 - Conference contribution
C2 - 40255468
AN - SCOPUS:85204426967
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 3644
EP - 3656
BT - Long Papers
A2 - Ku, Lun-Wei
A2 - Martins, Andre F. T.
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
Y2 - 11 August 2024 through 16 August 2024
ER -