The problem being solved
Natural language inference (NLI): judging whether sentence A entails sentence B, i.e. whether B can be inferred from A. Put simply, deciding whether the two sentences A and B have the same meaning. For example, "A man is playing soccer" entails "A man is playing a sport", but contradicts "The man is sleeping."
Method
Our natural language inference network consists of the following parts: input encoding (Input Encoding), local inference modeling (Local Inference Modeling), and inference composition (Inference Composition). The structure diagram is as follows:
Vertically, it shows the three main components of the system; horizontally, the left side is a sequential NLI model called ESIM, and the right side is a tree-LSTM network that incorporates syntactic parsing information.
Input encoding
# Based on arXiv:1609.06038
from keras.layers import Input, LSTM, Bidirectional, BatchNormalization

Q1 = Input(name='Q1', shape=(MaxLen,))
Q2 = Input(name='Q2', shape=(MaxLen,))

# Embedding: create_pretrained_embedding builds an Embedding layer
# from pretrained word vectors (helper defined elsewhere in the post)
embedding = create_pretrained_embedding(pretrained_embedding, mask_zero=False)
bn = BatchNormalization(axis=2)
q1_embed = bn(embedding(Q1))
q2_embed = bn(embedding(Q2))

# Encode: a shared bidirectional LSTM contextualizes each word
encode = Bidirectional(LSTM(lstm_dim, return_sequences=True))
q1_encoded = encode(q1_embed)
q2_encoded = encode(q2_embed)
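A side note on the embedding setup: mask_zero=False is presumably chosen because several layers used later in this model (for example BatchNormalization and the dot-product attention) do not support Keras masking, so padding tokens are simply left in the sequence.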
There are two kinds of LSTM encoders:
A: the sequential model's approach
Each word in the sentence gets a representation that incorporates information from its surrounding context.
B: the tree-LSTM model's approach
Each node in the parse tree (a phrase or the whole sentence) gets a vector representation h_t (a minimal node-update sketch follows the reference list below).
For an introduction to tree-LSTM, see the following articles:
[1] Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
[2] Natural Language Inference by Tree-Based Convolution and Heuristic Matching
[3] Long Short-Term Memory Over Recursive Structures
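To make the node update concrete, here is a minimal NumPy sketch of the child-sum tree-LSTM cell from reference [1]; the function name and the parameter containers W, U, b are illustrative, not from the original post:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def child_sum_treelstm_node(x, child_h, child_c, W, U, b):
    """One node update of a child-sum tree-LSTM (Tai et al., 2015).
    x: (d_in,) input word vector (zeros for internal nodes without input).
    child_h, child_c: (k, d) hidden/cell states of the node's k children
    (for a leaf, pass arrays of shape (0, d)).
    W, U, b: dicts of weights/biases for the gates 'i', 'f', 'o', 'u'."""
    h_sum = child_h.sum(axis=0)                          # zero vector for a leaf
    i = sigmoid(W['i'] @ x + U['i'] @ h_sum + b['i'])    # input gate
    o = sigmoid(W['o'] @ x + U['o'] @ h_sum + b['o'])    # output gate
    u = np.tanh(W['u'] @ x + U['u'] @ h_sum + b['u'])    # candidate cell value
    c = i * u
    if len(child_h) > 0:
        # one forget gate per child, conditioned on that child's hidden state
        f = np.stack([sigmoid(W['f'] @ x + U['f'] @ h_k + b['f'])
                      for h_k in child_h])
        c = c + (f * child_c).sum(axis=0)
    h = o * np.tanh(c)                                   # the node representation h_t
    return h, c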
Part II: Local Inference Modeling
A: sequential model
Soft-align the two encoded sentences with attention, capturing similar or contradictory correspondences between pairs of words.
B: tree-LSTM model
For any two nodes of the two trees, compute an attention weight with the paper's formula (11), e_ij = a_i^T b_j, i.e. the dot product of the two encoded vectors.
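For the sequential (ESIM) branch, this step can be sketched in Keras, continuing the input-encoding snippet above; this follows common open-source ESIM implementations rather than the authors' official code, with q1_encoded and q2_encoded coming from the earlier encoding step:

from keras.layers import Dot, Lambda, Permute, Concatenate, Subtract, Multiply
from keras.activations import softmax

# Attention energies e_ij = dot(q1_i, q2_j) -- formula (11) in the paper
attention = Dot(axes=-1)([q1_encoded, q2_encoded])          # (batch, len1, len2)

# Normalize along each direction to obtain soft-alignment weights
w_att_1 = Lambda(lambda x: softmax(x, axis=1))(attention)
w_att_2 = Permute((2, 1))(Lambda(lambda x: softmax(x, axis=2))(attention))

# Soft alignment: each position in one sentence becomes a weighted
# sum of the other sentence's encoded vectors
q1_aligned = Dot(axes=1)([w_att_2, q2_encoded])             # (batch, len1, dim)
q2_aligned = Dot(axes=1)([w_att_1, q1_encoded])             # (batch, len2, dim)

# Enhance local inference information: [a; a~; a - a~; a * a~]
q1_combined = Concatenate()([q1_encoded, q1_aligned,
                             Subtract()([q1_encoded, q1_aligned]),
                             Multiply()([q1_encoded, q1_aligned])])
q2_combined = Concatenate()([q2_encoded, q2_aligned,
                             Subtract()([q2_encoded, q2_aligned]),
                             Multiply()([q2_encoded, q2_aligned])])

The concatenation [a; a~; a - a~; a * a~] is the paper's "enhancement" of local inference information: the element-wise difference and product sharpen the contradiction and similarity signals between each aligned pair.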
"Enhanced LSTM for Natural Language Inference" (Natural language Inference)