Building a Chatbot with TensorFlow


The structure of this article: 1. the architecture of the chatbot; 2. implementing the chatbot model with TensorFlow; 3. how to prepare the chatbot training data; 4. chatbot source code interpretation.

1. Architecture diagram of the chatbot

Learning Resources:
[Do-It-Yourself Chatbot 9: What a Chatbot Should Do]
(http://www.shareditor.com/blogshow/?blogId=73)

The working process of a chatbot is: question analysis - retrieval - answer extraction.

Question analysis: analyze the user's question to extract its keywords, determine the question type, and identify what the user actually wants to know.

Retrieval: based on the analysis from the previous step, search for candidate answers.

Answer extraction: the retrieved results usually cannot be used directly; they still have to be organized into something genuinely useful that can serve as the answer.
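
To make the three stages concrete, here is a minimal sketch of how such a pipeline could be wired together. The function names and the keyword-overlap retrieval are purely illustrative assumptions, not the original system:

def analyze_question(text):
    # Illustrative question analysis: split into words and treat them all as keywords.
    words = text.split(' ')
    return {'keywords': set(words), 'type': 'unknown'}

def retrieve(analysis, knowledge_base):
    # Illustrative retrieval: rank knowledge-base entries by keyword overlap.
    def score(entry):
        return len(analysis['keywords'] & set(entry['question'].split(' ')))
    return sorted(knowledge_base, key=score, reverse=True)[:5]

def extract_answer(candidates):
    # Illustrative answer extraction: return the best candidate's answer text.
    return candidates[0]['answer'] if candidates else 'Sorry, I do not know.'

def chatbot_reply(question, knowledge_base):
    analysis = analyze_question(question)
    candidates = retrieve(analysis, knowledge_base)
    return extract_answer(candidates)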

The key technologies involved are shown in the figure.

If the figure does not display, the text version is:

Question analysis:
Chinese word segmentation, part-of-speech tagging, entity tagging, concept category tagging, syntactic analysis, semantic analysis, logical structure tagging, anaphora resolution, correlation tagging, question classification, answer category determination.

Massive-text knowledge representation:
Web text resource acquisition, machine learning methods, large-scale semantic computation and inference, knowledge representation systems and knowledge base construction.

Answer generation and filtering:
Candidate answer extraction, relationship inference, overlap judgment, noise filtering.

2. Implementing the chatbot model with TensorFlow

Previously, following a Siraj video, I wrote "Write a Chatbot Yourself",
which only covered the simple main workflow of data - model - training. That version is implemented in Lua; the detailed code is on his GitHub.

The article below is implemented with TensorFlow plus the TFLearn library, and from it you can learn more about model building, training, and prediction:

Learning resources: Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made

What the two articles have in common is that both are implemented as seq2seq models.

The LSTM model structure is:

For the details, go straight to the article above; here I only post a brief flow diagram and description of the model-building stage:

First, preprocess the original 3 million chat pairs: segment the words and split each pair into a question and an answer.

Then use word2vec to train word vectors and generate a binary word-vector file.
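
The article does not show the word2vec step itself. Below is a minimal sketch of how the 200-dimensional vectors could be trained and saved in the binary format read later by load_vectors; using gensim here is an assumption, since the original may have used the word2vec C tool instead:

# Minimal sketch, assuming gensim; the original may have used the word2vec C tool.
from gensim.models import Word2Vec

# Each line of the segmented corpus looks like: "question words | answer words".
sentences = []
with open('./segment_result_lined.3000000.pair.less') as f:
    for line in f:
        for part in line.split('|'):
            sentences.append(part.split())

# 200-dimensional vectors, matching the dimension mentioned later in the article
# (the parameter is called vector_size in gensim >= 4.0).
model = Word2Vec(sentences, size=200, min_count=5, workers=4)
model.wv.save_word2vec_format('vectors.bin', binary=True)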

These are then passed as the input data X to the following pipeline:

The question goes into the LSTM encoder and the answer goes into the decoder,

each producing its own output tensors.

The decoder emits one word result per step, and every result is appended to a list.

Finally, that list, together with the encoder output, forms the input of the next regression step and is fed into the DNN network.
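
To make the flow above more concrete, here is a minimal sketch of such a model built with TFLearn layers. It follows the described flow (an LSTM encoder over the question half of the XY input, an LSTM decoder over the answer half, their outputs collected and fed into a regression layer), but it is not the original author's code; the layer choices and hyperparameters are assumptions:

# Minimal sketch of the described encoder/decoder model, assuming TFLearn + TF 1.x.
import tensorflow as tf
import tflearn

def build_model(max_seq_len, word_vec_dim):
    # XY input: the reversed, padded question followed by the padded answer.
    net = tflearn.input_data(shape=[None, max_seq_len * 2, word_vec_dim])

    # Encoder: an LSTM over the question half of the input.
    encoder_inputs = tf.slice(net, [0, 0, 0], [-1, max_seq_len, word_vec_dim])
    encoder_output = tflearn.lstm(encoder_inputs, word_vec_dim, return_seq=False)

    # Decoder: an LSTM over the answer half, returning one output per step.
    decoder_inputs = tf.slice(net, [0, max_seq_len, 0], [-1, max_seq_len, word_vec_dim])
    decoder_outputs = tflearn.lstm(decoder_inputs, word_vec_dim, return_seq=True)

    # Collect the encoder output plus every per-step decoder output
    # (max_seq_len + 1 vectors, matching the Y data built by generate_trainig_data).
    outputs = tf.stack([encoder_output] + list(decoder_outputs), axis=1)

    # Regress the whole output sequence against the target word vectors.
    net = tflearn.regression(outputs, optimizer='sgd', learning_rate=0.1, loss='mean_square')
    return tflearn.DNN(net)

A model built this way can be trained with the model.fit call shown in the next section on the XY/Y arrays produced by generate_trainig_data.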

3. How to prepare the Chatbot training data

Learning Resources:
Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made

The training data is generated as follows: read each line of the input file and split it into a question sentence and an answer sentence at the '|' separator. Each word in a sentence is converted into a word vector with word2vec, and every sentence's vector sequence is padded to the same size, self.word_vec_dim * self.max_seq_len. The answers form the Y data, and question + answer together form the XY data, which are then fed into the model for training:

model.fit(trainXY, trainY, n_epoch=1000, snapshot_epoch=False, batch_size=1)

The code is as follows:

def init_seq(input_file):
    """Read the word-segmented text file and load all word sequences."""
    # word_vector_dict, question_seqs and answer_seqs are module-level globals.
    file_object = open(input_file, 'r')
    vocab_dict = {}
    while True:
        question_seq = []
        answer_seq = []
        line = file_object.readline()
        if line:
            line_pair = line.split('|')
            line_question = line_pair[0]
            line_answer = line_pair[1]
            for word in line_question.decode('utf-8').split(' '):
                if word_vector_dict.has_key(word):
                    question_seq.append(word_vector_dict[word])
            for word in line_answer.decode('utf-8').split(' '):
                if word_vector_dict.has_key(word):
                    answer_seq.append(word_vector_dict[word])
        else:
            break
        question_seqs.append(question_seq)
        answer_seqs.append(answer_seq)
    file_object.close()
def generate_trainig_data(self):
    xy_data = []
    y_data = []
    for i in range(len(question_seqs)):
        question_seq = question_seqs[i]
        answer_seq = answer_seqs[i]
        if len(question_seq) < self.max_seq_len and len(answer_seq) < self.max_seq_len:
            # Pad the question on the left and reverse it; pad the answer on the right.
            sequence_xy = [np.zeros(self.word_vec_dim)] * (self.max_seq_len - len(question_seq)) + list(reversed(question_seq))
            sequence_y = answer_seq + [np.zeros(self.word_vec_dim)] * (self.max_seq_len - len(answer_seq))
            sequence_xy = sequence_xy + sequence_y
            # Prepend an all-ones vector to Y as the start marker for the decoder.
            sequence_y = [np.ones(self.word_vec_dim)] + sequence_y
            xy_data.append(sequence_xy)
            y_data.append(sequence_y)
    return np.array(xy_data), np.array(y_data)
4. Chatbot source code interpretation

Learning Resources:
Do-It-Yourself Chatbot 38: So This Is How a Chatbot Is Made

The source code for this article is on GitHub:

The refined steps are as follows:

Steps 2 (prepare the data) and 3 (build the model) are the parts covered in detail above. The overall steps are: import packages, prepare the data, build the model, train, predict.

1. Import packages

import sys
import math
import tflearn
import tensorflow as tf
from tensorflow.python.ops import rnn_cell
from tensorflow.python.ops import rnn
import chardet
import numpy as np
import struct
2. Prepare the data

def load_word_set()
Splits the 3-million-pair corpus into question and answer parts and extracts the words.

def load_word_set():
    file_object = open('./segment_result_lined.3000000.pair.less', 'r')
    while True:
        line = file_object.readline()
        if line:
            line_pair = line.split('|')
            line_question = line_pair[0]
            line_answer = line_pair[1]
            for word in line_question.decode('utf-8').split(' '):
                word_set[word] = 1
            for word in line_answer.decode('utf-8').split(' '):
                word_set[word] = 1
        else:
            break
    file_object.close()

def load_vectors(input)
Loads word vectors from vectors.bin and returns a word_vector_dict dictionary whose keys are words and whose values are their 200-dimensional vectors.
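
The body of load_vectors is not shown here. As a reference, below is a minimal sketch of how a word2vec C-format vectors.bin file can be parsed with struct into such a dictionary; the exact parsing in the original code (file name, encoding handling) may differ:

# Minimal sketch, assuming vectors.bin is in the standard word2vec C binary format.
import struct

def load_vectors(input):
    """Parse a word2vec binary file into {word: [float, ...]}."""
    word_vector_dict = {}
    with open(input, 'rb') as f:
        # Header line: "<vocab_size> <vector_dim>\n"
        vocab_size, vec_dim = map(int, f.readline().split())
        for _ in range(vocab_size):
            # The word is the byte sequence up to the first space (skip stray newlines).
            word_bytes = b''
            while True:
                ch = f.read(1)
                if ch == b' ':
                    break
                if ch != b'\n':
                    word_bytes += ch
            word = word_bytes.decode('utf-8', errors='ignore')
            # The vector is vec_dim consecutive 4-byte little-endian floats.
            vector = list(struct.unpack('<%df' % vec_dim, f.read(4 * vec_dim)))
            word_vector_dict[word] = vector
    return word_vector_dict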

def init_seq(input_file)
Converts the words in each question and answer into their word vectors, producing the word-vector sequences question_seqs and answer_seqs.

(The implementation of init_seq is the same code already shown in section 3 above.)
