Sequence-to-sequence Sample code in TensorFlow


In the NLP field, sequence-to-sequence models have many applications, such as machine translation and automatic question-answering bots. After reading the relevant papers, I began to study the sample source code provided by TensorFlow. At first it looked very obscure, but by now I basically understand it, so here I will first introduce the theory of sequence-to-sequence models and then explain the source code. This is also a summary of my last two weeks of study; if it helps you too, all the better.

The sequence-to-sequence model

The most common model in NLP is the language model, whose research object is a single sequence, whereas the sequence-to-sequence model discussed here studies two sequences at the same time. The classic sequence-to-sequence model consists of two RNN networks, one called the "encoder" and the other the "decoder": the former encodes a variable-length input sequence into a fixed-length vector representation, and the latter decodes that fixed-length vector into a variable-length output sequence. Its basic network structure is as follows.

Each of these small circles represents a cell, such as a GRUCell, an LSTMCell, a multi-layer GRUCell, or a multi-layer LSTMCell. The intuitive interpretation is that the encoder's final hidden state $c$ contains all the information of the input sequence, so $c$ can be used to decode the output. Although weights are shared within the encoder and within the decoder, the encoder and decoder generally use different sets of parameters. Training a sequence-to-sequence model is similar to training a supervised learning model: we maximize the objective

$$\theta^{*} = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{t=1}^{T_{n}} \log p(y_{t}^{n} \mid y_{<t}^{n}, x^{n})$$

where

$$p(y_{t} \mid y_{1}, \dots, y_{t-1}, c) = g(y_{t-1}, s_{t}, c) = \frac{1}{Z} \exp\left(w_{t}^{T} \phi(y_{t-1}, z_{t}, c_{t}) + b_{t}\right)$$

Here $w_{t}$ is called the output projection and $b_{t}$ the output bias, and the normalizing constant is computed as $Z = \sum_{k: y_{k} \in V} \exp\left(w_{k}^{T} \phi(y_{t-1}, z_{t}, c_{t}) + b_{k}\right)$.
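To make the structure concrete, here is a minimal sketch of such an encoder-decoder in TensorFlow. It uses the Keras API rather than the legacy seq2seq module the original tutorial code was built on, and all vocabulary sizes and dimensions are illustrative assumptions, not values from the tutorial:

```python
import tensorflow as tf

# Illustrative hyperparameters (assumptions, not taken from the tutorial).
src_vocab, tgt_vocab, embed_dim, hidden_dim = 8000, 8000, 128, 256

# Encoder: reads the variable-length source sequence and compresses
# it into a fixed-length state vector c (the final hidden state).
enc_in = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(src_vocab, embed_dim)(enc_in)
_, c = tf.keras.layers.GRU(hidden_dim, return_state=True)(enc_emb)

# Decoder: initialized with c, it unrolls over the (shifted) target
# sequence and produces one hidden state s_t per output position.
dec_in = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(tgt_vocab, embed_dim)(dec_in)
s = tf.keras.layers.GRU(hidden_dim, return_sequences=True)(
    dec_emb, initial_state=c)

# Output projection w_t and bias b_t from the equation above: a dense
# layer producing logits, normalized by the softmax inside the loss.
logits = tf.keras.layers.Dense(tgt_vocab)(s)

model = tf.keras.Model([enc_in, dec_in], logits)
```

Note the two separate Embedding and GRU layers: as described above, the encoder and decoder share no parameters with each other, while each of them reuses its own weights across all time steps.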
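In code, maximizing this objective is the same as minimizing the per-token cross-entropy of the decoder's softmax outputs. A sketch of such a loss for the model above, under the assumption that padding positions use token id 0:

```python
# Per-token negative log-likelihood -log p(y_t | y_<t, x), computed
# directly from the logits (i.e. w_t^T phi(...) + b_t before softmax).
xent = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

def masked_nll(y_true, logits):
    per_token = xent(y_true, logits)  # shape [batch, T]
    # Mask out padding (assumed id 0) so that only real target tokens
    # contribute to the sum over t = 1..T_n.
    mask = tf.cast(tf.not_equal(y_true, 0), per_token.dtype)
    return tf.reduce_sum(per_token * mask) / tf.reduce_sum(mask)

model.compile(optimizer="adam", loss=masked_nll)
```

Because the cross-entropy applies log-softmax internally, the normalizing constant $Z$ never has to be materialized explicitly.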
