In NLP, sequence-to-sequence models have many applications, such as machine translation and automatic question-answering bots. After reading the relevant papers, I began to study the source code provided by TensorFlow. It looked quite obscure at first, but by now I basically understand it, so here I first introduce the theory behind sequence-to-sequence models and then walk through the source code. This also serves as a summary of my last two weeks of study, and if it helps you as well, all the better.

Sequence-to-sequence model
The most common model in NLP is the language model, whose object of study is a single sequence, whereas the sequence-to-sequence model discussed here involves two sequences at once. The classic sequence-to-sequence model consists of two RNN networks, one called the encoder and the other called the decoder. The former encodes a variable-length input sequence into a fixed-length vector representation; the latter decodes that fixed-length vector into a variable-length output sequence. The basic network structure is as follows:
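The encode-then-decode flow above can be sketched in a few lines of NumPy. This is a minimal illustration only, using plain (non-gated) RNN cells and hypothetical toy dimensions rather than the GRU/LSTM cells and real parameters a trained model would have:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the text).
d_in, d_h, d_out = 4, 8, 4

# Encoder parameters, shared across all encoder time steps.
W_xh = rng.normal(scale=0.1, size=(d_h, d_in))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
# Decoder has its own, separate parameter set.
U_hh = rng.normal(scale=0.1, size=(d_h, d_h))
W_hy = rng.normal(scale=0.1, size=(d_out, d_h))

def encode(xs):
    """Fold a variable-length input sequence into one fixed-length vector c."""
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h  # the final hidden state c

def decode(c, steps):
    """Unroll the decoder from c for a chosen number of output steps."""
    h, ys = c, []
    for _ in range(steps):
        h = np.tanh(U_hh @ h)
        ys.append(W_hy @ h)
    return ys

xs = [rng.normal(size=d_in) for _ in range(5)]  # length-5 input sequence
c = encode(xs)                                  # fixed-length representation
ys = decode(c, steps=3)                         # length-3 output sequence
```

Note that the input has length 5 while the output has length 3: because everything the decoder sees is funneled through the fixed-length vector c, the two sequence lengths are decoupled, which is exactly what machine translation needs.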
Each small circle represents a cell, such as a GRUCell, an LSTMCell, a multi-layer GRUCell, and so on. Intuitively, the encoder's final hidden state c contains all the information of the input sequence, so c can be used to decode the output. Weights are shared across time steps within the encoder and within the decoder, but the encoder and decoder generally have separate parameter sets. Training a sequence-to-sequence model is similar to training a supervised learning model: we maximize the objective

$$\theta^{*} = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{t=1}^{T_{n}} \log p(y_{t}^{n} \mid y_{<t}^{n}, x^{n})$$

where

$$p(y_{t} \mid y_{1}, \dots, y_{t-1}, c) = g(y_{t-1}, s_{t}, c) = \frac{1}{Z} \exp\left(w_{t}^{\top} \phi(y_{t-1}, s_{t}, c_{t}) + b_{t}\right)$$

Here $w_{t}$ is called the output projection, $b_{t}$ is called the output bias, and the normalizing constant is the usual softmax sum over the vocabulary:

$$Z = \sum_{k : y_{k} \in V} \exp\left(w_{k}^{\top} \phi(y_{t-1}, s_{t}, c_{t}) + b_{k}\right)$$
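The softmax formula above is easy to verify numerically. Below is a small sketch with hypothetical logit values standing in for $w_{k}^{\top}\phi(\cdot) + b_{k}$ over a made-up 5-word vocabulary:

```python
import numpy as np

# Hypothetical per-word logits: logit_k = w_k^T phi(y_{t-1}, s_t, c_t) + b_k.
logits = np.array([2.0, 1.0, 0.5, -1.0, 0.0])

Z = np.exp(logits).sum()   # normalizing constant: sum over the vocabulary
p = np.exp(logits) / Z     # p(y_t = k | y_<t, c) for each word k

# Training maximizes the sum of log-probabilities of the reference words.
# For a single time step whose reference word is index 0, the contribution
# to the objective is:
log_likelihood = np.log(p[0])
```

Dividing by Z is what turns the unnormalized scores into a proper distribution; computing it requires touching every word in the vocabulary, which is why large-vocabulary models often resort to approximations such as sampled softmax.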