For the past couple of days I have been looking into attention models. Browsing through several answers, many people recommended the paper "Neural Machine Translation by Jointly Learning to Align and Translate". I read it and found it very good: it lays out the concept of the Encoder-Decoder (encoding-decoding) model and its traditional RNN implementation, and then presents the authors' own attention model. I made some excerpts while reading and have written them up below.

1. The Encoder-Decoder model and its RNN implementation
The so-called Encoder-Decoder model, also known as the encoding-decoding model, is a model for the seq2seq problem.
So what is seq2seq? Simply put, it means generating an output sequence Y from an input sequence X. Seq2seq has many applications, such as translation, document summarization, and question answering systems. In translation, the input sequence is the text to be translated and the output sequence is the translated text; in a question answering system, the input sequence is the question and the output sequence is the answer.
To solve the seq2seq problem, the Encoder-Decoder (encoding-decoding) model was proposed. Encoding means converting the input sequence into a fixed-length vector; decoding means converting that previously generated fixed-length vector into the output sequence.
Of course, this is only the general idea. In an actual implementation the encoder and decoder are not fixed: CNN/RNN/BiRNN/GRU/LSTM and so on are all options, and they can be combined freely. For example, you might use a BiRNN for encoding and an RNN for decoding, or an RNN for encoding and an LSTM for decoding, and so on.
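To make this concrete, here is a minimal sketch of such an encoder-decoder pair in PyTorch (the article itself shows no code, so this is an assumption). The class names, dimensions, and the choice of GRU cells are all hypothetical, picked only to illustrate how the encoder compresses the source into a fixed-length vector that the decoder then consumes:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # GRU here, but nn.RNN / nn.LSTM would slot in the same way
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, src_len) token ids
        outputs, h = self.rnn(self.embed(x))
        return outputs, h                 # h: the fixed-length summary of x

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, y, h):              # y: (batch, tgt_len), h: encoder state
        outputs, h = self.rnn(self.embed(y), h)
        return self.out(outputs), h       # logits over the target vocabulary

enc, dec = Encoder(1000, 32, 64), Decoder(1000, 32, 64)
src = torch.randint(0, 1000, (2, 7))      # two toy source sentences, length 7
tgt = torch.randint(0, 1000, (2, 5))      # two toy target prefixes, length 5
_, summary = enc(src)                     # encode: sequence -> fixed vector
logits, _ = dec(tgt, summary)             # decode: fixed vector -> sequence
```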
To keep the exposition simple, both the encoder and the decoder here are taken to be RNNs. In an RNN, the hidden state at the current time step is determined by the hidden state at the previous time step and the input at the current time step, that is
$$h_t = f(h_{t-1}, x_t)$$
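As a toy illustration of this update, a vanilla RNN chooses f to be a tanh over affine maps of the previous state and the current input. The sketch below assumes NumPy; the weights and dimensions are made up purely for illustration:

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    # h_t = f(h_{t-1}, x_t), with f a tanh over affine maps of the
    # previous state and the current input (the vanilla-RNN choice)
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 4))            # state-to-state weights (toy sizes)
W_x = rng.normal(size=(4, 3))            # input-to-state weights
b = np.zeros(4)

h = np.zeros(4)                          # h_0
for x_t in rng.normal(size=(5, 3)):      # an input sequence with T_x = 5
    h = rnn_step(h, x_t, W_h, W_x, b)    # h now holds the current h_t
```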
Once the hidden state for every time step has been obtained, the hidden-state information is summarized to produce the final semantic vector
$$C = q(h_1, h_2, h_3, \ldots, h_{T_x})$$
A simple method is to use the last hidden state as the semantic vector C, i.e.
$$C = q(h_1, h_2, h_3, \ldots, h_{T_x}) = h_{T_x}$$
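In code terms, with a PyTorch GRU standing in as the encoder (again an assumption for illustration), taking C = h_{T_x} is just reading off the final hidden state; the mean over all h_t is shown only as one other conceivable choice of q:

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=3, hidden_size=4, batch_first=True)
x = torch.randn(1, 5, 3)                 # one toy sequence, T_x = 5
outputs, h_last = rnn(x)                 # outputs stacks h_1 ... h_Tx

C = h_last.squeeze(0)                    # C = q(h_1, ..., h_Tx) = h_Tx
assert torch.allclose(C, outputs[:, -1]) # the last output is exactly h_Tx

C_mean = outputs.mean(dim=1)             # another conceivable q: average all h_t
```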