Abstract
We present a novel fine-tuning algorithm in a deep hybrid architecture for semi-supervised text classification. During each increment of the online learning process, the fine-tuning algorithm serves as a top-down mechanism for pseudo-jointly modifying model parameters following a bottom-up generative learning pass. The resulting model, trained under what we call the Bottom-Up-Top-Down learning algorithm, is shown to outperform a variety of competitive models and baselines trained across a wide range of splits between supervised and unsupervised training data.
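As a rough illustration of the training loop the abstract describes, the sketch below pairs a bottom-up unsupervised reconstruction update with an optional top-down supervised fine-tuning update on each online increment. This is a minimal toy example, not the paper's hybrid architecture: the single tied-weight autoencoder-style hidden layer, the softmax output layer, the learning rate, and the labeled/unlabeled schedule are all assumptions made here purely for illustration.

```python
# Toy sketch of a bottom-up (unsupervised) then top-down (supervised)
# update on each online increment. Layer types, sizes, and update rules
# are illustrative assumptions, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyBottomUpTopDown:
    def __init__(self, n_in, n_hid, n_classes, lr=0.05):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hid))      # input -> hidden
        self.W2 = rng.normal(0.0, 0.1, (n_hid, n_classes))  # hidden -> classes
        self.lr = lr

    def bottom_up(self, x):
        """Unsupervised pass: update W1 as a tied-weight autoencoder
        so the hidden layer reconstructs the input x."""
        h = sigmoid(x @ self.W1)
        x_hat = sigmoid(h @ self.W1.T)
        err = x_hat - x                              # reconstruction error
        d_xhat = err * x_hat * (1 - x_hat)
        d_h = (d_xhat @ self.W1) * h * (1 - h)
        grad_W1 = np.outer(x, d_h) + np.outer(d_xhat, h)
        self.W1 -= self.lr * grad_W1

    def top_down(self, x, y):
        """Supervised fine-tuning pass: push the label's error signal
        back down through both layers."""
        h = sigmoid(x @ self.W1)
        logits = h @ self.W2
        p = np.exp(logits - logits.max())
        p /= p.sum()
        d_logits = p.copy()
        d_logits[y] -= 1.0                           # softmax cross-entropy grad
        grad_W2 = np.outer(h, d_logits)
        d_h = (d_logits @ self.W2.T) * h * (1 - h)
        grad_W1 = np.outer(x, d_h)
        self.W2 -= self.lr * grad_W2
        self.W1 -= self.lr * grad_W1

    def increment(self, x, y=None):
        """One online increment: the bottom-up generative pass always runs;
        top-down fine-tuning runs only when a label is available."""
        self.bottom_up(x)
        if y is not None:
            self.top_down(x, y)

# Usage on a toy stream of bag-of-words vectors, mostly unlabeled
model = ToyBottomUpTopDown(n_in=20, n_hid=8, n_classes=2)
for t in range(100):
    x = (rng.random(20) > 0.8).astype(float)
    y = int(x[:10].sum() > x[10:].sum()) if t % 5 == 0 else None
    model.increment(x, y)
```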
Original language | English (US)
---|---
Title of host publication | Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing
Publisher | Association for Computational Linguistics (ACL)
Pages | 471-481
Number of pages | 11
ISBN (Electronic) | 9781941643327
State | Published - 2015
Event | Conference on Empirical Methods in Natural Language Processing, EMNLP 2015 - Lisbon, Portugal
Duration | Sep 17 2015 → Sep 21 2015
Other
Other | Conference on Empirical Methods in Natural Language Processing, EMNLP 2015
---|---
Country/Territory | Portugal
City | Lisbon
Period | 9/17/15 → 9/21/15
All Science Journal Classification (ASJC) codes
- Computational Theory and Mathematics
- Computer Science Applications
- Information Systems