Last time I mentioned a good resource for learning about chatbots, the Do-It-Yourself Chatbot tutorial; I wonder whether any of you have started studying it.
I've been learning a little every day lately and sharing it with you.
The structure of this post:

1. Architecture diagram of the chatbot
2. Implementing the chatbot model with TensorFlow
3. How to prepare the chatbot training data
4. Chatbot source code interpretation

1. Architecture Diagram of the Chatbot
Learning Resources:
[Do-It-Yourself Chatbot 9: What a Chatbot Should Do](http://www.shareditor.com/blogshow/?blogId=73)
A chatbot works in three steps: question analysis, retrieval, and answer extraction.
Question analysis: extract the keywords from the user's question, determine the question type, and identify what the user actually wants to know.
Retrieval: based on the analysis in the previous step, search for candidate answers.
Answer extraction: the retrieved results cannot be used directly; they must be organized into something genuinely useful that can serve as the final answer.
The key technologies involved are shown in the figure.
If the image doesn't display, here is the text version:
Question analysis:
Chinese word segmentation, POS tagging, entity tagging, concept category tagging, syntactic analysis, semantic analysis, logical structure tagging, anaphora resolution, relevance tagging, question classification, answer category determination;
Massive-text knowledge representation:
Web text resource acquisition, machine learning methods, large-scale semantic computation and inference, knowledge representation systems and knowledge base construction
Answer generation and filtering:
Candidate answer extraction, relation inference, overlap judgment, noise filtering

2. Implementing the Chatbot Model with TensorFlow
Previously, following one of Siraj's videos, I wrote a post called "Let's Write a Chatbot Ourselves".
That article only walked through the bare main workflow of data, model, and training; the implementation was in Lua, and the detailed code is on his GitHub.
The article below is implemented with the TensorFlow + TFLearn libraries, and from it you can learn more about modeling, training, and prediction:
Learning resources: Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made
What the two articles have in common is that both are implemented with seq2seq.
The LSTM model structure is:
For the details, go straight to the article above; here I post a brief flow diagram and description of the model-building stage:
First, the raw data of 3 million chat pairs is preprocessed: each line is segmented into words and split into a question and an answer.
Word2vec is then used to train word vectors and produce a binary word-vector file (a rough sketch of this step follows).
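The post doesn't say which tools were used for this preprocessing, so take the following only as a minimal sketch: jieba for segmentation, gensim for Word2vec, and the input file name are all assumptions of mine; only the 200-dimensional vectors.bin output is taken from the post.

```python
# -*- coding: utf-8 -*-
# Hedged sketch of the preprocessing step; jieba, gensim, and the raw
# file name are assumptions, not the original author's code.
import jieba
from gensim.models import Word2Vec

sentences = []
with open('chat_corpus.raw') as f:      # hypothetical raw chat file
    for line in f:
        # segment each line into a list of words
        sentences.append(list(jieba.cut(line.strip())))

# train 200-dimensional word vectors and save them in binary format
model = Word2Vec(sentences, size=200, min_count=1)
model.wv.save_word2vec_format('vectors.bin', binary=True)
```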
These word-vector sequences are passed as the input data X to the following process:
the question goes into the LSTM encoder and the answer into the decoder, each producing an output tensor.
In the decoder, the result for each word is appended to a list.
Finally, that list, together with the encoder output, forms the input to the next stage, a regression layer, and is fed into the DNN network.
3. How to Prepare the Chatbot Training Data
Learning Resources:
Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made
The training data are generated as follows:

- Read each line of the input file and split it on '|' into a question sentence and an answer sentence.
- Convert every word of each sentence into its word vector via Word2vec.
- Pad each sentence's vector sequence to a fixed size: self.max_seq_len vectors of dimension self.word_vec_dim.
- The answers form the Y data; question + answer forms the XY data, which is then fed to the model for training:

```python
model.fit(trainXY, trainY, n_epoch=1000, snapshot_epoch=False, batch_size=1)
```
The code is as follows:
```python
def init_seq(input_file):
    """Read the word-segmented text file and load all word sequences."""
    file_object = open(input_file, 'r')
    vocab_dict = {}
    while True:
        question_seq = []
        answer_seq = []
        line = file_object.readline()
        if line:
            line_pair = line.split('|')
            line_question = line_pair[0]
            line_answer = line_pair[1]
            for word in line_question.decode('utf-8').split(' '):
                if word in word_vector_dict:
                    question_seq.append(word_vector_dict[word])
            for word in line_answer.decode('utf-8').split(' '):
                if word in word_vector_dict:
                    answer_seq.append(word_vector_dict[word])
        else:
            break
        question_seqs.append(question_seq)
        answer_seqs.append(answer_seq)
    file_object.close()
```
```python
def generate_trainig_data(self):
    xy_data = []
    y_data = []
    for i in range(len(question_seqs)):
        question_seq = question_seqs[i]
        answer_seq = answer_seqs[i]
        if len(question_seq) < self.max_seq_len and len(answer_seq) < self.max_seq_len:
            # XY: zero-padded, reversed question followed by padded answer
            sequence_xy = [np.zeros(self.word_vec_dim)] * (self.max_seq_len - len(question_seq)) \
                          + list(reversed(question_seq))
            sequence_y = answer_seq + [np.zeros(self.word_vec_dim)] * (self.max_seq_len - len(answer_seq))
            sequence_xy = sequence_xy + sequence_y
            # Y gets a GO marker (a vector of ones) prepended
            sequence_y = [np.ones(self.word_vec_dim)] + sequence_y
            xy_data.append(sequence_xy)
            y_data.append(sequence_y)
    return np.array(xy_data), np.array(y_data)
```
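To make the shapes concrete: each XY sample consists of 2 * max_seq_len word vectors (the zero-padded, reversed question followed by the padded answer), and each Y sample consists of max_seq_len + 1 vectors (a GO marker of all ones followed by the padded answer). So trainXY has shape (n_samples, 2 * max_seq_len, word_vec_dim) and trainY has shape (n_samples, max_seq_len + 1, word_vec_dim).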
4. Chatbot Source Code Interpretation
Learning Resources:
Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made
The source code this article walks through is on GitHub. The steps break down as follows: importing packages, preparing the data, building the model, training, and prediction. Steps 2 (preparing the data) and 3 (building the model) are the parts already highlighted above.

1. Importing Packages
```python
import sys
import math
import tflearn
import tensorflow as tf
from tensorflow.python.ops import rnn_cell
from tensorflow.python.ops import rnn
import chardet
import numpy as np
import struct
```
2. Preparing Data
def load_word_set()
Splits the 3-million-pair corpus into question and answer parts and extracts the words.
```python
def load_word_set():
    file_object = open('./segment_result_lined.3000000.pair.less', 'r')
    while True:
        line = file_object.readline()
        if line:
            line_pair = line.split('|')
            line_question = line_pair[0]
            line_answer = line_pair[1]
            for word in line_question.decode('utf-8').split(' '):
                word_set[word] = 1
            for word in line_answer.decode('utf-8').split(' '):
                word_set[word] = 1
        else:
            break
    file_object.close()
```
def load_vectors(input)
Loads word vectors from vectors.bin and returns the dictionary word_vector_dict, whose keys are words and whose values are 200-dimensional vectors.
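The post doesn't show this function's body. As a minimal sketch only, here is what it might look like, assuming vectors.bin uses the standard word2vec binary format (a header line with vocabulary size and dimension, then each word followed by its packed floats), which the struct import above suggests:

```python
import struct

# Sketch under the assumption of the standard word2vec binary format;
# not the original implementation.
def load_vectors(input):
    word_vector_dict = {}
    with open(input, 'rb') as f:
        vocab_size, vector_dim = map(int, f.readline().split())
        n_bytes = struct.calcsize('f') * vector_dim
        for _ in range(vocab_size):
            # the word is stored as utf-8 bytes terminated by a space
            word = b''
            ch = f.read(1)
            while ch != b' ':
                word += ch
                ch = f.read(1)
            vector = struct.unpack('%df' % vector_dim, f.read(n_bytes))
            word_vector_dict[word.decode('utf-8').strip()] = list(vector)
    return word_vector_dict
```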
def init_seq(input_file)
Converts the words of each question and answer into their word vectors, producing the word-vector sequences question_seqs and answer_seqs.
```python
def init_seq(input_file):
    """Read the word-segmented text file and load all word sequences."""
    file_object = open(input_file, 'r')
    vocab_dict = {}
    while True:
        question_seq = []
        answer_seq = []
        line = file_object.readline()
        if line:
            line_pair = line.split('|')
            line_question = line_pair[0]
            line_answer = line_pair[1]
            for word in line_question.decode('utf-8').split(' '):
                if word in word_vector_dict:
                    question_seq.append(word_vector_dict[word])
            for word in line_answer.decode('utf-8').split(' '):
                if word in word_vector_dict:
                    answer_seq.append(word_vector_dict[word])
        else:
            break
        question_seqs.append(question_seq)
        answer_seqs.append(answer_seq)
    file_object.close()
```
def vector_sqrtlen(vector)
Computes the Euclidean length of a vector.
```python
def vector_sqrtlen(vector):
    len = 0
    for item in vector:
        len += item * item
    len = math.sqrt(len)
    return len
```
def vector_cosine(v1, v2)
Computes the cosine similarity between two vectors.
```python
def vector_cosine(v1, v2):
    if len(v1) != len(v2):
        sys.exit(1)
    sqrtlen1 = vector_sqrtlen(v1)
    sqrtlen2 = vector_sqrtlen(v2)
    value = 0
    for item1, item2 in zip(v1, v2):
        value += item1 * item2
    return value / (sqrtlen1 * sqrtlen2)
```
def vector2word(vector)
Given a word vector, searches the word-vector dictionary for the vector closest to it, remembers the corresponding word, and returns the word together with the cosine value.
```python
def vector2word(vector):
    max_cos = -10000
    match_word = ''
    for word in word_vector_dict:
        v = word_vector_dict[word]
        cosine = vector_cosine(vector, v)
        if cosine > max_cos:
            max_cos = cosine
            match_word = word
    return (match_word, max_cos)
```
3. Building the Model

class MySeq2Seq(object)
The data-preparation and model pieces here were already written up separately in the two notes above.
def generate_trainig_data(self)
Builds xy_data and y_data from question_seqs and answer_seqs.
def model(self, feed_previous=False)
Uses the input data to generate encoder_inputs and decoder_inputs with a GO head.
Passes encoder_inputs to the encoder, which returns an output (the first value of the predicted sequence) and a state (handed to the decoder).
In the decoder, the encoder's last output is used as the first input; during prediction, the output of each time step becomes the input of the next time step. A sketch of such a model follows.
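To make that description concrete, here is a minimal sketch of such a model method, and only a sketch: it follows the description above but is not the original author's code, and the layer options (optimizer, learning rate, the ones-vector GO head) are my assumptions. It relies on the imports listed earlier and targets the TFLearn / TensorFlow 0.x APIs used in this post; note that tf.concat's argument order is reversed in TF >= 1.0.

```python
# Hedged sketch of MySeq2Seq.model(); assumptions marked in comments.
def model(self, feed_previous=False):
    # XY input: reversed padded question followed by the padded answer
    input_data = tflearn.input_data(
        shape=[None, self.max_seq_len * 2, self.word_vec_dim], name='XY')
    encoder_inputs = tf.slice(
        input_data, [0, 0, 0], [-1, self.max_seq_len, self.word_vec_dim])
    decoder_inputs_tmp = tf.slice(
        input_data, [0, self.max_seq_len, 0],
        [-1, self.max_seq_len - 1, self.word_vec_dim])
    # GO head: a vector of ones, matching the np.ones() marker in the Y data
    go_head = tf.ones_like(tf.slice(
        decoder_inputs_tmp, [0, 0, 0], [-1, 1, self.word_vec_dim]))
    decoder_inputs = tf.concat(1, [go_head, decoder_inputs_tmp])  # TF 0.x arg order

    # Encoder: its last output is the first value of the predicted
    # sequence, and its state is handed to the decoder
    encoder_output, states = tflearn.lstm(
        encoder_inputs, self.word_vec_dim, return_state=True,
        scope='encoder_lstm')
    outputs = [tf.reshape(encoder_output, [-1, 1, self.word_vec_dim])]

    # Decoder: one step at a time; at prediction time (feed_previous=True)
    # each step consumes the previous step's own output
    step_input = go_head
    for i in range(self.max_seq_len):
        step_output, states = tflearn.lstm(
            step_input, self.word_vec_dim, initial_state=states,
            return_state=True, reuse=(i > 0), scope='decoder_lstm')
        step_output = tf.reshape(step_output, [-1, 1, self.word_vec_dim])
        outputs.append(step_output)
        if i + 1 < self.max_seq_len:
            step_input = step_output if feed_previous else tf.slice(
                decoder_inputs, [0, i + 1, 0], [-1, 1, self.word_vec_dim])

    # max_seq_len + 1 output vectors, matching the Y layout built earlier
    real_output_sequence = tf.concat(1, outputs)
    net = tflearn.regression(real_output_sequence, optimizer='sgd',
                             learning_rate=0.1, loss='mean_square')
    return tflearn.DNN(net)
```

Because every decoder step reuses the same variable scope, the weights are shared across time steps; feed_previous=False gives teacher forcing during training, while feed_previous=True replays the model's own outputs at prediction time.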
4. Training

def train(self)
Generates the XY and Y data with generate_trainig_data(), passes them to the model defined above, trains with model.fit, and saves the model.
```python
def train(self):
    trainXY, trainY = self.generate_trainig_data()
    model = self.model(feed_previous=False)
    model.fit(trainXY, trainY, n_epoch=1000, snapshot_epoch=False, batch_size=1)
    model.save('./model/model')
    return model
```
5. Prediction

generate_trainig_data() produces the data, and model.predict performs the prediction. Each sample of the prediction result is the word-vector sequence of one sentence; for every vector in the sample, the closest vector is looked up in the word-vector dictionary, and the corresponding word is returned along with the cosine similarity between the two.
```python
if __name__ == '__main__':
    phrase = sys.argv[1]
    if 3 == len(sys.argv):
        my_seq2seq = MySeq2Seq(word_vec_dim=word_vec_dim,
                               max_seq_len=max_seq_len, input_file=sys.argv[2])
    else:
        my_seq2seq = MySeq2Seq(word_vec_dim=word_vec_dim,
                               max_seq_len=max_seq_len)
    if phrase == 'train':
        my_seq2seq.train()
    else:
        model = my_seq2seq.load()
        trainXY, trainY = my_seq2seq.generate_trainig_data()
        predict = model.predict(trainXY)
        for sample in predict:
            print "predict answer"
            for w in sample[1:]:
                (match_word, max_cos) = vector2word(w)
                #if vector_sqrtlen(w) < 1:
                #    break
                print match_word, max_cos, vector_sqrtlen(w)
```
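Assuming this script is saved as my_seq2seq.py (a name of my own; the post doesn't give the file name), `python my_seq2seq.py train ./segment_result_lined.3000000.pair.less` would train and save the model, while replacing `train` with anything else (e.g. `predict`) would load the saved model and print a predicted answer, word by word with its cosine score, for each training question.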