TY - JOUR
T1 - Learning simpler language models with the differential state framework
AU - Ororbia, Alexander G.
AU - Mikolov, Tomas
AU - Reitter, David
N1 - Funding Information:
We thank C. Lee Giles and Prasenjit Mitra for their advice. We thank NVIDIA for providing GPU hardware that supported this letter. A.O. was funded by a NACME-Sloan scholarship; D.R. acknowledges funding from NSF IIS-1459300.
Publisher Copyright:
© 2017 Massachusetts Institute of Technology.
PY - 2017/12/1
Y1 - 2017/12/1
N2 - Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks such as language modeling. Existing architectures that address the issue are often complex and costly to train. The differential state framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models. DSF models maintain longer-term memory by learning to interpolate between a fast-changing data-driven representation and a slowly changing, implicitly stable state. Within the DSF framework, a new architecture is presented, the delta-RNN. This model requires hardly any more parameters than a classical, simple recurrent network. In language modeling at the word and character levels, the delta-RNN outperforms popular complex architectures, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU), and, when regularized, performs comparably to several state-of-the-art baselines. At the subword level, the delta-RNN's performance is comparable to that of complex gated architectures.
AB - Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks such as language modeling. Existing architectures that address the issue are often complex and costly to train. The differential state framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models. DSF models maintain longer-term memory by learning to interpolate between a fast-changing data-driven representation and a slowly changing, implicitly stable state. Within the DSF framework, a new architecture is presented, the delta-RNN. This model requires hardly any more parameters than a classical, simple recurrent network. In language modeling at the word and character levels, the delta-RNN outperforms popular complex architectures, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU), and, when regularized, performs comparably to several state-of-the-art baselines. At the subword level, the delta-RNN's performance is comparable to that of complex gated architectures.
UR - http://www.scopus.com/inward/record.url?scp=85035755417&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85035755417&partnerID=8YFLogxK
U2 - 10.1162/NECO_a_01017
DO - 10.1162/NECO_a_01017
M3 - Letter
C2 - 28957029
AN - SCOPUS:85035755417
SN - 0899-7667
VL - 29
SP - 3327
EP - 3352
JO - Neural Computation
JF - Neural Computation
IS - 12
ER -