TY - GEN
T1 - End-task oriented textual entailment via deep explorations of inter-sentence interactions
AU - Yin, Wenpeng
AU - Schütze, Hinrich
AU - Roth, Dan
N1 - Publisher Copyright:
© 2018 Association for Computational Linguistics
PY - 2018
Y1 - 2018
N2 - This work deals with SCITAIL, a natural entailment challenge derived from a multi-choice question answering problem. The premises and hypotheses in SCITAIL were generated with no awareness of each other, and did not specifically aim at the entailment task. This makes it more challenging than other entailment data sets and more directly useful to the end-task – question answering. We propose DEISTE (deep explorations of inter-sentence interactions for textual entailment) for this entailment task. Given word-to-word interactions between the premise-hypothesis pair (P, H), DEISTE consists of: (i) a parameter-dynamic convolution to make important words in P and H play a dominant role in learnt representations; and (ii) a position-aware attentive convolution to encode the representation and position information of the aligned word pairs. Experiments show that DEISTE gets ≈5% improvement over prior state of the art and that the pretrained DEISTE on SCITAIL generalizes well on RTE-5.
AB - This work deals with SCITAIL, a natural entailment challenge derived from a multi-choice question answering problem. The premises and hypotheses in SCITAIL were generated with no awareness of each other, and did not specifically aim at the entailment task. This makes it more challenging than other entailment data sets and more directly useful to the end-task – question answering. We propose DEISTE (deep explorations of inter-sentence interactions for textual entailment) for this entailment task. Given word-to-word interactions between the premise-hypothesis pair (P, H), DEISTE consists of: (i) a parameter-dynamic convolution to make important words in P and H play a dominant role in learnt representations; and (ii) a position-aware attentive convolution to encode the representation and position information of the aligned word pairs. Experiments show that DEISTE gets ≈5% improvement over prior state of the art and that the pretrained DEISTE on SCITAIL generalizes well on RTE-5.
UR - http://www.scopus.com/inward/record.url?scp=85061713239&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061713239&partnerID=8YFLogxK
U2 - 10.18653/v1/p18-2086
DO - 10.18653/v1/p18-2086
M3 - Conference contribution
AN - SCOPUS:85061713239
T3 - ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
SP - 540
EP - 545
BT - ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Short Papers)
PB - Association for Computational Linguistics (ACL)
T2 - 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018
Y2 - 15 July 2018 through 20 July 2018
ER -