Encoder-decoder model and attention model


Over the past couple of days I have been reading about attention models. Among the answers I looked through, many people recommended the paper Neural Machine Translation by Jointly Learning to Align and Translate. I read it and found it very good: it explains the concept of the Encoder-Decoder (encoding-decoding) model and its traditional RNN implementation, and then presents the authors' own attention model. Below are my notes and excerpts from it.

1. The Encoder-Decoder model and its RNN implementation

The so-called Encoder-Decoder model, also known as the encoding-decoding model, is a model applied to seq2seq problems.

So what is seq2seq? Simply put, it means generating an output sequence Y from an input sequence X. Seq2seq has many applications, such as translation, document summarization, question answering systems, and so on. In translation, the input sequence is the text to be translated and the output sequence is the translated text; in a question answering system, the input sequence is the question and the output sequence is the answer.

To solve seq2seq problems, the Encoder-Decoder model, i.e. the encoding-decoding model, was proposed. Encoding means converting the input sequence into a fixed-length vector; decoding means converting that fixed-length vector back into an output sequence.

Of course, this is only the general idea. In practice the encoder and decoder are not fixed; candidates include CNN, RNN, BiRNN, GRU, LSTM, and so on, and they can be combined freely. For example, you can use a BiRNN for encoding and an RNN for decoding, or an RNN for encoding and an LSTM for decoding, and so on. A minimal sketch of this pattern follows.
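To make the combination concrete, here is a minimal PyTorch sketch of the encoder-decoder pattern. It is my own illustration, not code from the paper: the choice of GRUs, all names, and all dimensions are assumptions.

```python
# Minimal encoder-decoder sketch: encode the source into a fixed-length
# hidden state h, then condition the decoder on h (illustrative only).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_in, vocab_out, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(vocab_in, emb_dim)
        self.tgt_emb = nn.Embedding(vocab_out, emb_dim)
        # Encoder and decoder are independent choices of recurrent module.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_out)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_emb(src))           # h: fixed-length summary of the source
        dec_states, _ = self.decoder(self.tgt_emb(tgt), h)  # decoder starts from h
        return self.out(dec_states)                       # per-step scores over target vocabulary

model = Seq2Seq(vocab_in=1000, vocab_out=1000)
src = torch.randint(0, 1000, (2, 7))   # batch of 2 source sequences, length 7
tgt = torch.randint(0, 1000, (2, 5))   # teacher-forced target inputs, length 5
print(model(src, tgt).shape)           # torch.Size([2, 5, 1000])
```

Swapping in an LSTM or a bidirectional encoder is possible too, but would also require adapting the hidden-state handoff between the two modules.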

To simplify the exposition, both the encoder and the decoder here are RNNs. In an RNN, the hidden state at the current time step is determined by the hidden state at the previous time step and the current input:

$h_t = f(h_{t-1}, x_t)$
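The function f is left abstract here; one common concrete choice is the vanilla (Elman) cell, $h_t = \tanh(W_h h_{t-1} + W_x x_t + b)$. Below is a minimal NumPy sketch under that assumption; all weights and shapes are illustrative placeholders.

```python
# A vanilla (Elman) RNN cell as one concrete choice of f.
import numpy as np

rng = np.random.default_rng(0)
hid, inp = 4, 3                    # hidden size, input size (arbitrary)
W_h = rng.normal(size=(hid, hid))  # recurrent weights
W_x = rng.normal(size=(hid, inp))  # input weights
b = np.zeros(hid)

def step(h_prev, x_t):
    """One step of the recurrence h_t = f(h_{t-1}, x_t)."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)
```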

After the hidden states of all time steps have been obtained, they are summarized to produce the final semantic vector:

$c = q(h_1, h_2, h_3, \ldots, h_{T_x})$

A simple method is to use the last hidden state as the semantic vector c, i.e.

$c = q(h_1, h_2, h_3, \ldots, h_{T_x}) = h_{T_x}$
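Putting the pieces together, the sketch below runs the tanh cell from above over a toy source sequence, keeps every hidden state, and takes the last one as c. Again, every name, shape, and weight is an illustrative assumption, not the paper's setup.

```python
# Encode a toy sequence with a tanh RNN cell, keep all hidden states,
# and take the last one as the semantic vector c = h_{T_x}.
import numpy as np

rng = np.random.default_rng(0)
hid, inp, T_x = 4, 3, 6
W_h = rng.normal(size=(hid, hid))
W_x = rng.normal(size=(hid, inp))
xs = rng.normal(size=(T_x, inp))   # source sequence x_1 ... x_{T_x}

h = np.zeros(hid)
hidden = []
for x_t in xs:                     # h_t = f(h_{t-1}, x_t)
    h = np.tanh(W_h @ h + W_x @ x_t)
    hidden.append(h)

c = hidden[-1]                     # simplest choice of q: c = h_{T_x}
print(c.shape)                     # (4,)
```

The attention model discussed later replaces exactly this step: instead of compressing the whole source into the single vector c, the decoder gets a differently weighted summary of all the hidden states at each output step.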
