Basic knowledge (II): cosine similarity and TF-IDF


Directory:

I. Calculating the similarity between two strings

II. Application of TF-IDF and cosine similarity (Part 2): finding similar articles

I. Calculating the similarity between two strings

This section is reproduced from Cscmaker.

(1) Cosine similarity

Cosine similarity measures the similarity between two vectors by the cosine of the angle between them. The cosine of a 0-degree angle is 1, and the cosine of any other angle is no greater than 1, with a minimum value of -1. The cosine of the angle between two vectors thus indicates whether they point in roughly the same direction, which is why it is commonly used for comparing documents.

See the Wikipedia article on cosine similarity for an introduction.

(2) In the implementation below, the number of occurrences of each term is used as its value in the vector space. This is a plain term-frequency representation; no IDF (inverse document frequency) weighting is applied.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class SimilarDegreeByCos
{
    /*
     * Calculates the similarity of two strings (English text): a simple
     * cosine computation over raw term counts, with no weighting applied.
     */
    public static double getSimilarDegree(String str1, String str2)
    {
        // Vector space model implemented with a map: the key is a term, and the
        // value is an array of length 2 holding the number of occurrences of
        // that term in each of the two strings.
        Map<String, int[]> vectorSpace = new HashMap<String, int[]>();
        int[] itemCountArray = null; // declared here to avoid frequent local allocations

        // Split the first string on spaces
        String[] strArray = str1.split(" ");
        for (int i = 0; i < strArray.length; ++i)
        {
            if (vectorSpace.containsKey(strArray[i]))
                ++(vectorSpace.get(strArray[i])[0]);
            else
            {
                itemCountArray = new int[2];
                itemCountArray[0] = 1;
                itemCountArray[1] = 0;
                vectorSpace.put(strArray[i], itemCountArray);
            }
        }

        strArray = str2.split(" ");
        for (int i = 0; i < strArray.length; ++i)
        {
            if (vectorSpace.containsKey(strArray[i]))
                ++(vectorSpace.get(strArray[i])[1]);
            else
            {
                itemCountArray = new int[2];
                itemCountArray[0] = 0;
                itemCountArray[1] = 1;
                vectorSpace.put(strArray[i], itemCountArray);
            }
        }

        // Compute the similarity
        double vector1Modulo = 0.00; // the modulus of vector 1
        double vector2Modulo = 0.00; // the modulus of vector 2
        double vectorProduct = 0.00; // the dot product of the two vectors
        Iterator<Map.Entry<String, int[]>> iter = vectorSpace.entrySet().iterator();

        while (iter.hasNext())
        {
            Map.Entry<String, int[]> entry = iter.next();
            itemCountArray = entry.getValue();

            vector1Modulo += itemCountArray[0] * itemCountArray[0];
            vector2Modulo += itemCountArray[1] * itemCountArray[1];

            vectorProduct += itemCountArray[0] * itemCountArray[1];
        }

        vector1Modulo = Math.sqrt(vector1Modulo);
        vector2Modulo = Math.sqrt(vector2Modulo);

        // Return the degree of similarity
        return vectorProduct / (vector1Modulo * vector2Modulo);
    }

    public static void main(String[] args)
    {
        String str1 = "Gold Silver Truck";
        String str2 = "Shipment of gold damaged in a fire";
        String str3 = "Delivery of silver arrived in a silver truck";
        String str4 = "Shipment of gold arrived in a truck";
        String str5 = "Gold gold gold and gold gold";

        System.out.println(SimilarDegreeByCos.getSimilarDegree(str1, str2));
        System.out.println(SimilarDegreeByCos.getSimilarDegree(str1, str3));
        System.out.println(SimilarDegreeByCos.getSimilarDegree(str1, str4));
        System.out.println(SimilarDegreeByCos.getSimilarDegree(str1, str5));
    }
}
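One caveat about the code above: it splits on a single space and matches terms exactly, so the comparison is case- and punctuation-sensitive. In the test strings in main, for instance, "Truck" in str1 never matches "truck" in str3. A common preprocessing step (a suggestion of mine, not part of the original code) is to normalize before splitting:

String[] strArray = str1.toLowerCase().split("\\s+"); // lower-case, split on any whitespace run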

II. Application of TF-IDF and cosine similarity (Part 2): finding similar articles

This section is reproduced from Ruan Yifeng.

Last time, I used the TF-IDF algorithm to automatically extract keywords.

The main idea of TF-IDF is this: if a word or phrase appears frequently in one article (a high TF) but seldom in other articles, it is considered to have good discriminating power and to be well suited for classification. TF-IDF is simply TF × IDF, where TF is the term frequency and IDF is the inverse document frequency. TF measures how often a term appears in document d. The idea behind IDF is that the fewer documents contain the term t (that is, the smaller n is), the larger the IDF, and the better t distinguishes between categories.
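To make the definitions concrete, here is a minimal sketch in Java (an illustration of mine, not code from the original article). It uses the common formulation tf = count / documentLength and idf = log(N / (n + 1)), where N is the corpus size, n is the number of documents containing the term, and the +1 is a typical smoothing choice to avoid division by zero:

import java.util.Collections;
import java.util.List;

public class TfIdfSketch
{
    // TF-IDF of `term` in `document`, relative to `corpus`.
    public static double tfIdf(String term, List<String> document, List<List<String>> corpus)
    {
        // TF: relative frequency of the term within the document
        double tf = (double) Collections.frequency(document, term) / document.size();

        // n: how many documents in the corpus contain the term
        int n = 0;
        for (List<String> doc : corpus)
            if (doc.contains(term))
                ++n;

        // IDF: the rarer the term across the corpus, the larger the value
        // (the +1 smoothing is an assumption, not from the article)
        double idf = Math.log((double) corpus.size() / (n + 1));

        return tf * idf;
    }
}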

Today, let's look at another related issue. Sometimes, in addition to finding keywords, we also want to find other articles similar to a given one. For example, under each major story, Google News also lists a number of similar news items.

To find similar articles, we need "cosine similarity". Let me give an example of what cosine similarity is.

For the sake of simplicity, let's start with sentences.

Sentence A: I like watching TV and don't like watching movies.

Sentence B: I don't like watching TV, and I don't like watching movies.

How do we calculate the similarity of these two sentences?

The basic idea is that the more similar the wording of two sentences is, the more similar their content should be. We can therefore start from word frequency and calculate their similarity.

The first step, word segmentation.

Sentence A: I / like / watch / TV, no / like / watch / movie.

Sentence B: I / no / like / watch / TV, also / no / like / watch / movie.

The second step is to list all the words.

I, like, watch, TV, movie, no, also.

The third step is to calculate the word frequency.

Sentence A: I 1, like 2, watch 2, TV 1, movie 1, no 1, also 0.

Sentence B: I 1, like 2, watch 2, TV 1, movie 1, no 2, also 1.

Fourth step, write the word frequency vector.

Sentence A: [1, 2, 2, 1, 1, 1, 0]

Sentence B: [1, 2, 2, 1, 1, 2, 1]

Here, the question becomes how to calculate the similarity between the two vectors.

We can regard them as two line segments in space, both starting from the origin ([0, 0, ...]) and pointing in different directions. The two segments form an angle. If the angle is 0 degrees, the directions are the same and the segments coincide; if the angle is 90 degrees, the directions are completely different; if the angle is 180 degrees, the directions are opposite. We can therefore judge the similarity of two vectors by the size of the angle: the smaller the angle, the more similar the vectors.

For a two-dimensional space, a and b are two vectors, and we want to calculate their angle θ. The law of cosines tells us it can be computed with the following formula, where a and b are the lengths of the two segments and c is the length of the segment connecting their endpoints:

cos θ = (a² + b² − c²) / (2ab)

Assuming that vector a is [x1, y1] and vector b is [x2, y2], the formula can be rewritten in the following form:

cos θ = (x1·x2 + y1·y2) / (√(x1² + y1²) × √(x2² + y2²))

Mathematicians have shown that this way of calculating the cosine also holds for n-dimensional vectors. Suppose A and B are two n-dimensional vectors, A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]. Then the cosine of the angle θ between A and B equals:

cos θ = (A1·B1 + A2·B2 + ... + An·Bn) / (√(A1² + A2² + ... + An²) × √(B1² + B2² + ... + Bn²))

Using this formula, we can get the cosine of the angle between sentence a and sentence B.
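Plugging the two word frequency vectors from above into this formula:

cos θ = (1×1 + 2×2 + 2×2 + 1×1 + 1×1 + 1×2 + 0×1) / (√(1²+2²+2²+1²+1²+1²+0²) × √(1²+2²+2²+1²+1²+2²+1²))
      = 13 / (√12 × √16)
      ≈ 0.938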

The closer the cosine is to 1, the closer the angle is to 0 degrees, that is, the more similar the two vectors are. This is called "cosine similarity". So sentence A and sentence B above are very similar; in fact, their angle is about 20.3 degrees.
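The same numbers can be checked with a few lines of Java (a quick sanity check of my own, not from the original article):

double[] a = { 1, 2, 2, 1, 1, 1, 0 };
double[] b = { 1, 2, 2, 1, 1, 2, 1 };
double dot = 0, normA = 0, normB = 0;
for (int i = 0; i < a.length; ++i)
{
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
}
double cos = dot / (Math.sqrt(normA) * Math.sqrt(normB)); // ≈ 0.938
double angle = Math.toDegrees(Math.acos(cos));            // ≈ 20.3 degrees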

As a result, we get an algorithm for "finding similar articles":

(1) Use the TF-IDF algorithm to find the keywords of the two articles;

(2) Take a number of keywords from each article (for example, 20), merge them into a single set, and compute each article's word frequency for every term in the set (relative frequencies can be used to compensate for differences in article length);

(3) Generate the two articles' respective word frequency vectors;

(4) Calculate the cosine similarity of the two vectors; the larger the value, the more similar the articles (see the sketch after this list).
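Here is a minimal sketch of steps (2) through (4) in Java (my illustration; the original article provides no code, and tokenization and keyword extraction are assumed to happen elsewhere):

import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SimilarArticles
{
    // Step (2): merge the keyword lists of the two articles into one term set.
    public static List<String> mergedTerms(List<String> keywords1, List<String> keywords2)
    {
        Set<String> terms = new LinkedHashSet<String>(keywords1);
        terms.addAll(keywords2);
        return new ArrayList<String>(terms);
    }

    // Steps (2)-(3): relative frequency of each term among an article's tokens.
    public static double[] frequencyVector(List<String> articleTokens, List<String> terms)
    {
        double[] vector = new double[terms.size()];
        for (int i = 0; i < terms.size(); ++i)
            vector[i] = (double) Collections.frequency(articleTokens, terms.get(i)) / articleTokens.size();
        return vector;
    }

    // Step (4): cosine similarity of the two frequency vectors.
    public static double cosine(double[] a, double[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; ++i)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}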

"Cosine similarity" is a very useful algorithm that can be used as long as it calculates the similarity of two vectors.

Next time, I want to talk about how to automatically generate a summary of an article on the basis of word frequency statistics.

(End)
