TY - GEN
T1 - Multi-grained named entity recognition
AU - Xia, Congying
AU - Zhang, Chenwei
AU - Yang, Tao
AU - Li, Yaliang
AU - Du, Nan
AU - Wu, Xian
AU - Fan, Wei
AU - Ma, Fenglong
AU - Yu, Philip
N1 - Funding Information:
We thank the reviewers for their valuable comments. Special thanks go to Lu Wei from Singapore University of Technology and Design for sharing the datasets split details. This work is supported in part by NSF through grants IIS-1526499, IIS-1763325, and CNS-1626432.
Publisher Copyright:
© 2019 Association for Computational Linguistics
PY - 2020
Y1 - 2020
N2 - This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition, where multiple entities or entity mentions in a sentence may be non-overlapping or totally nested. Different from traditional approaches that regard NER as a sequential labeling task and annotate entities consecutively, MGNER detects and recognizes entities at multiple granularities: it is able to recognize named entities without explicitly assuming non-overlapping or totally nested structures. MGNER consists of a Detector that examines all possible word segments and a Classifier that categorizes entities. In addition, contextual information and a self-attention mechanism are utilized throughout the framework to improve NER performance. Experimental results show that MGNER outperforms current state-of-the-art baselines by up to 4.4% in terms of F1 score on nested and non-overlapping NER tasks.
AB - This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition, where multiple entities or entity mentions in a sentence may be non-overlapping or totally nested. Different from traditional approaches that regard NER as a sequential labeling task and annotate entities consecutively, MGNER detects and recognizes entities at multiple granularities: it is able to recognize named entities without explicitly assuming non-overlapping or totally nested structures. MGNER consists of a Detector that examines all possible word segments and a Classifier that categorizes entities. In addition, contextual information and a self-attention mechanism are utilized throughout the framework to improve NER performance. Experimental results show that MGNER outperforms current state-of-the-art baselines by up to 4.4% in terms of F1 score on nested and non-overlapping NER tasks.
UR - http://www.scopus.com/inward/record.url?scp=85082145682&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082145682&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85082145682
T3 - ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
SP - 1430
EP - 1440
BT - ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019
Y2 - 28 July 2019 through 2 August 2019
ER -