TY - GEN
T1 - Task-specific attentive pooling of phrase alignments contributes to sentence matching
AU - Yin, Wenpeng
AU - Schütze, Hinrich
N1 - Publisher Copyright:
© 2017 Association for Computational Linguistics.
PY - 2017
Y1 - 2017
N2 - This work comparatively studies two typical sentence matching tasks, textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reaching this observation lies in phrase detection, phrase representation, phrase alignment, and, more importantly, how aligned phrases of different matching degrees are connected to the final classifier. Prior work (i) has limitations in phrase generation and representation, (ii) conducts alignment at word and phrase levels using handcrafted features, or (iii) applies a single alignment framework without considering the characteristics of specific tasks, which limits its effectiveness across tasks. We propose an architecture based on the Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.
AB - This work comparatively studies two typical sentence matching tasks, textual entailment (TE) and answer selection (AS), observing that weaker phrase alignments are more critical in TE, while stronger phrase alignments deserve more attention in AS. The key to reaching this observation lies in phrase detection, phrase representation, phrase alignment, and, more importantly, how aligned phrases of different matching degrees are connected to the final classifier. Prior work (i) has limitations in phrase generation and representation, (ii) conducts alignment at word and phrase levels using handcrafted features, or (iii) applies a single alignment framework without considering the characteristics of specific tasks, which limits its effectiveness across tasks. We propose an architecture based on the Gated Recurrent Unit that supports (i) representation learning of phrases of arbitrary granularity and (ii) task-specific attentive pooling of phrase alignments between two sentences. Experimental results on TE and AS match our observation and show the effectiveness of our approach.
UR - http://www.scopus.com/inward/record.url?scp=85021676817&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85021676817&partnerID=8YFLogxK
U2 - 10.18653/v1/e17-1066
DO - 10.18653/v1/e17-1066
M3 - Conference contribution
AN - SCOPUS:85021676817
T3 - 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference
SP - 699
EP - 709
BT - Long Papers - Continued
PB - Association for Computational Linguistics (ACL)
T2 - 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017
Y2 - 3 April 2017 through 7 April 2017
ER -