The standard way deep learning is applied to NLP is to first segment the sentence into words (tokens) and then convert each word into its word vector, so the sentence becomes a sequence of vectors. (https://kexue.fm/archives/4765)
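As a rough illustration of that preprocessing step (the toy vocabulary, the whitespace tokenizer, and the embedding dimension below are all invented for the example, not taken from the article):

```python
import numpy as np

# Toy vocabulary and embedding table, both placeholders for illustration.
vocab = {"<unk>": 0, "attention": 1, "is": 2, "all": 3, "you": 4, "need": 5}
embed_dim = 8
embedding_table = np.random.randn(len(vocab), embed_dim)

def sentence_to_vectors(sentence):
    """Tokenize by whitespace and look up a vector for each token."""
    tokens = sentence.lower().split()
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in tokens]
    return embedding_table[ids]            # shape: (seq_len, embed_dim)

X = sentence_to_vectors("Attention is all you need")
print(X.shape)  # (5, 8)
```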
The first idea is an RNN layer, which processes the sequence recurrently. However, an RNN cannot learn the overall structural information well, because it is essentially a Markov-style decision process.
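For concreteness, a minimal sketch of the recurrence $y_t = f(y_{t-1}, x_t)$: each output depends on the previous hidden state, so global context can only reach step t by passing through every intermediate step. The tanh nonlinearity and the weight shapes are just one common choice, not something specified by the article.

```python
import numpy as np

def rnn_layer(X, W_x, W_h, b):
    """Simple recurrence y_t = tanh(W_x x_t + W_h y_{t-1} + b)."""
    hidden_dim = W_h.shape[0]
    y = np.zeros(hidden_dim)
    outputs = []
    for x_t in X:                          # one step per token, strictly sequential
        y = np.tanh(W_x @ x_t + W_h @ y + b)
        outputs.append(y)
    return np.stack(outputs)               # (seq_len, hidden_dim)

# Usage with random stand-in word vectors (shapes are arbitrary).
embed_dim, hidden_dim = 8, 16
X = np.random.randn(5, embed_dim)
W_x = np.random.randn(hidden_dim, embed_dim) * 0.1
W_h = np.random.randn(hidden_dim, hidden_dim) * 0.1
b = np.zeros(hidden_dim)
print(rnn_layer(X, W_x, W_h, b).shape)  # (5, 16)
```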
The second idea is a CNN layer. The CNN scheme is also very natural: traverse the sequence with a sliding window. For example, a convolution of size 3 computes $y_t = f(x_{t-1}, x_t, x_{t+1})$.
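A minimal sketch of that windowed computation, assuming zero padding so the output keeps the sequence length; the linear-plus-ReLU choice of f and the random weights are placeholders for the example.

```python
import numpy as np

def conv1d_size3(X, W, b):
    """y_t = f(x_{t-1}, x_t, x_{t+1}); here f is a linear map followed by ReLU.

    W has shape (out_dim, 3 * in_dim): one weight block per window position.
    """
    seq_len, in_dim = X.shape
    padded = np.vstack([np.zeros(in_dim), X, np.zeros(in_dim)])  # zero-pad both ends
    outputs = []
    for t in range(seq_len):
        window = padded[t:t + 3].reshape(-1)        # concat x_{t-1}, x_t, x_{t+1}
        outputs.append(np.maximum(W @ window + b, 0.0))
    return np.stack(outputs)                        # (seq_len, out_dim)

in_dim, out_dim = 8, 16
X = np.random.randn(5, in_dim)
W = np.random.randn(out_dim, 3 * in_dim) * 0.1
b = np.zeros(out_dim)
print(conv1d_size3(X, W, b).shape)  # (5, 16)
```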
In Facebook's paper, Seq2Seq learning is accomplished purely with convolutions, a refined and thorough use of convolution; it parallelizes easily and captures a certain amount of global structural information.
Google's masterpiece offers a third idea: pure attention! An RNN has to recurse step by step to accumulate global information, which is why a bidirectional RNN usually works better; a CNN can in fact only capture local information, enlarging its receptive field by stacking layers; attention takes the most direct approach and obtains global information in a single step. Its solution is $y_t = f(x_t, A, B)$, where A and B are another sequence (matrix).
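A minimal sketch of that idea using scaled dot-product attention: every output position attends to the whole sequence at once, with no recursion and no window. The dimensions and random projection weights are placeholders; taking A = B = X, as below, gives self-attention.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted sum over all of V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (len_q, len_k) pairwise similarities
    return softmax(scores, axis=-1) @ V    # global information in one step

# Self-attention: queries, keys and values all come from the same sequence X.
seq_len, d_model = 5, 8
X = np.random.randn(seq_len, d_model)
W_q, W_k, W_v = (np.random.randn(d_model, d_model) * 0.1 for _ in range(3))
Y = attention(X @ W_q, X @ W_k, X @ W_v)
print(Y.shape)  # (5, 8)
```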
About the attention mechanism ("Attention Is All You Need")