In 2017 there were two papers I particularly admired that took a similar approach: Facebook's "Convolutional Sequence to Sequence Learning" and Google's "Attention Is All You Need". Both are innovations on seq2seq, and in essence both abandon the RNN structure for the seq2seq task.
In this blog post, the author gives a simple analysis of "Attention Is All You Need". Of course, both papers are quite popular, so there are already many interpretations online (though many of them are just direct translations of the paper, with little of the author's own understanding), so here I try to use my own words as much as possible and avoid repeating what others have already said.

I. Sequence encoding
The deep-learning approach to NLP is basically: first segment the sentence into words, then map each word to its word vector, giving a sequence of word vectors. In this way, each sentence corresponds to a matrix $X = (x_1, x_2, \dots, x_t)$, where $x_i$ is the word vector (a row vector) of the $i$-th word, with dimension $d$, so $X \in \mathbb{R}^{n \times d}$. The problem then becomes how to encode these sequences.
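As a minimal sketch of this setup (the toy vocabulary, random embedding table, and dimensions below are all hypothetical, just for illustration), a tokenized sentence can be mapped to such a matrix $X$ like this:

```python
import numpy as np

# Hypothetical toy vocabulary and randomly initialized embedding table.
vocab = {"<unk>": 0, "attention": 1, "is": 2, "all": 3, "you": 4, "need": 5}
d = 8                                          # word vector dimension d
rng = np.random.default_rng(0)
embedding = rng.normal(size=(len(vocab), d))   # one row vector per word

tokens = ["attention", "is", "all", "you", "need"]
ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]
X = embedding[ids]                             # shape (n, d): one row per word
print(X.shape)                                 # (5, 8)
```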
The first basic idea is to use an RNN layer. The RNN scheme is very simple: a recursion of the form
$$y_t = f(y_{t-1}, x_t)$$
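To make the recursion concrete, here is a minimal sketch of $y_t = f(y_{t-1}, x_t)$ with $f$ taken to be a simple tanh cell; the weight matrices, hidden size, and initialization below are hypothetical and not from any specific paper:

```python
import numpy as np

def rnn_encode(X, h_dim=16, seed=0):
    """Encode a sequence X of shape (n, d) with the recursion y_t = f(y_{t-1}, x_t)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(d, h_dim))      # input-to-hidden weights (hypothetical)
    U = rng.normal(scale=0.1, size=(h_dim, h_dim))  # hidden-to-hidden weights (hypothetical)
    b = np.zeros(h_dim)
    y = np.zeros(h_dim)                             # initial state y_0
    outputs = []
    for t in range(n):                              # strictly sequential: y_t depends on y_{t-1}
        y = np.tanh(X[t] @ W + y @ U + b)
        outputs.append(y)
    return np.stack(outputs)                        # shape (n, h_dim)

Y = rnn_encode(np.random.default_rng(1).normal(size=(5, 8)))
print(Y.shape)                                      # (5, 16)
```

The explicit loop over $t$ makes the point below visible: each step needs the previous step's output, so the computation cannot be parallelized across time steps.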
Whether it is the LSTM, the GRU, or the recently popular SRU, none of them departs from this recursive framework. The RNN structure itself is relatively simple and well suited to sequence modeling, but an obvious drawback of RNNs is that they cannot be parallelized and are therefore slow, which is a natural defect of recursion. In addition, I personally feel that RNNs cannot learn global structural information very well, because they are essentially a Markov decision process.
The second idea is to use a CNN layer. In fact, the CNN scheme is also very natural: a window-style traversal. For example, a convolution of size 3 is