http://blog.csdn.net/chencheng126/article/details/50070021
This article is based on that blogger's post.
Principle

1. The need for text similarity calculation began with search engines: a search engine must compute the similarity between the "user query" and the many crawled "pages" so that the most similar pages are returned to the user first.

2. The main algorithm used is TF-IDF.

TF: term frequency
IDF: inverse document frequency

The main idea is that if a word or phrase appears with high frequency in one article but seldom in other articles, it is considered to have good discriminating power and is well suited for classification.

Step 1: Segment the text of each page into words, turning it into a bag of words.
Step 2: Count the total number of pages (documents), M.
Step 3: For a given page, count its total number of words N; for a word w on that page, count the number of times n it appears on the page, and the number of documents m in which w appears at all. Then the TF-IDF of w is n/N * M/m (there are other normalized formulas, e.g. taking a logarithm of the IDF factor; this is the most basic and intuitive one).
Step 4: Repeat step 3 to compute the TF-IDF value of every word on the page.
Step 5: Repeat step 4 to compute the TF-IDF value of every word on all pages.

3. Processing the user query

Step 1: Segment the user query into words.
Step 2: Compute the TF-IDF value of each word in the query from the statistics of the page (document) library.

4. Similarity calculation

Cosine similarity is used to compute the angle between the user query vector and each page vector; the smaller the angle, the more similar they are.
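The steps above can be sketched in plain Python. This is a minimal illustration, not the gensim implementation; it uses the tf = n/N, idf = log(M/m) variant of the formula:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Basic TF-IDF: tf = n/N (term count over document length),
    idf = log(M/m) (total documents over documents containing the term)."""
    M = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] += 1
    weighted = []
    for doc in docs:
        counts, N = Counter(doc), len(doc)
        weighted.append({t: (n / N) * math.log(M / df[t])
                         for t, n in counts.items()})
    return weighted

def cosine(a, b):
    """Cosine similarity between two sparse term -> weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Note that a term appearing in every document gets idf = log(1) = 0, which captures the idea that such a term has no discriminating power.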
# coding=utf-8
import logging
from gensim import corpora, models, similarities

datapath = 'D:/hellowxc/python/testres0519.txt'
querypath = 'D:/hellowxc/python/queryres0519.txt'
storepath = 'D:/hellowxc/python/store0519.txt'

def similarity(datapath, querypath, storepath):
    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',
                        level=logging.INFO)

    # Stream the corpus: one pre-segmented document per line.
    class MyCorpus(object):
        def __iter__(self):
            for line in open(datapath):
                yield line.split()

    corp = MyCorpus()
    dictionary = corpora.Dictionary(corp)
    corpus = [dictionary.doc2bow(text) for text in corp]

    # Train a TF-IDF model on the corpus and transform the corpus with it.
    tfidf = models.TfidfModel(corpus)
    corpus_tfidf = tfidf[corpus]

    # Read the (pre-segmented) query and map it into TF-IDF space.
    q_file = open(querypath, 'r')
    query = q_file.readline()
    q_file.close()
    vec_bow = dictionary.doc2bow(query.split())
    vec_tfidf = tfidf[vec_bow]

    # Cosine similarity between the query and every document.
    index = similarities.MatrixSimilarity(corpus_tfidf)
    sims = index[vec_tfidf]

    # Write one similarity score per line.
    sim_file = open(storepath, 'w')
    for i in list(sims):
        sim_file.write(str(i) + '\n')
    sim_file.close()

similarity(datapath, querypath, storepath)
The code above is my test code.

In my test, the querypath file contains a question, and the datapath file contains various answers to that question; I try, via text similarity analysis, to find which answer matches the question best.

The original blog post's test was different: there, querypath was a product description and datapath held the product's reviews, and text similarity analysis was used to check whether the description differed too much from the actual product.

Below is my test data; it is a small data set just for this test.

Note that all the data has already been word-segmented. For segmentation you can use the Python jieba library; see http://www.cnblogs.com/weedboy/p/6854324.html for a reference.
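The similarity script expects its input files in a specific shape: one document per line, with tokens separated by spaces (i.e. the output of a segmenter such as jieba). A minimal sketch of that format, using made-up example documents (`sample_docs` is illustrative, not the original test data):

```python
import os
import tempfile

# Made-up, pre-segmented example documents: one document per line,
# tokens separated by spaces -- the shape a segmenter such as jieba emits.
sample_docs = [
    "use gensim to compute tf idf vectors",
    "cosine similarity ranks the candidate answers",
]

path = os.path.join(tempfile.mkdtemp(), "testres.txt")
with open(path, "w", encoding="utf-8") as f:
    for doc in sample_docs:
        f.write(doc + "\n")

# Iterating the file line by line and splitting yields one token list per
# document -- exactly what the MyCorpus class in the test code above does.
corpus = [line.split() for line in open(path, encoding="utf-8")]
```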
Query (the query file)
Data (the answer file)
Store (i.e. the result file)
In the test results, the answer that actually fits the question best did indeed come out as the best match.
Summary:
1. gensim provides more algorithms than just TF-IDF (e.g. LSI and word2vec); make good use of them.
2. When segmenting with jieba I forgot to remove stop words, which had a big impact on the results; the jieba library has support for stop-word lists.
3. In question-and-answer systems, matching questions to answers by similarity alone is not feasible; supervised machine learning is needed.
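On point 2 above, a minimal sketch of removing stop words from segmented documents before building the corpus (the stop-word list here is a tiny made-up English example; in practice you would load a full list appropriate to your language, e.g. via jieba's stop-word support):

```python
# Illustrative stop-word list only; a real one would be much larger and
# language-appropriate.
STOP_WORDS = {"the", "a", "of", "to", "is"}

def remove_stop_words(tokens):
    """Drop stop words from a segmented document before doc2bow()."""
    return [t for t in tokens if t not in STOP_WORDS]

print(remove_stop_words("the answer to the question is gensim".split()))
# -> ['answer', 'question', 'gensim']
```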
Using gensim in Python for text similarity analysis