TF-IDF (term frequency-inverse document frequency) is a weighting technique commonly used in information retrieval and text mining. It is a statistical measure of how important a word is to a document in a collection or corpus. A word's importance increases in proportion to the number of times it appears in the document, but is offset by how frequently it appears across the corpus. Search engines often use variants of TF-IDF weighting to score and rank the relevance of documents to a user query. In addition to TF-IDF, web search engines also use link-analysis-based ranking methods to determine the order in which documents appear in search results.
The main idea of TF-IDF is: if a word or phrase appears frequently in one article but rarely in other articles, it is considered to have good discriminating power and is well suited for classification. TF-IDF is simply TF × IDF, where TF is the term frequency and IDF is the inverse document frequency. TF measures how often a term appears in document D. The main idea of IDF is: the fewer the documents containing term t (that is, the smaller n is), the larger the IDF, indicating that term t discriminates well between documents. But consider a class C: if the number of documents of class C containing term t is m, and the number of documents of all other classes containing t is k, then the total number of documents containing t is n = m + k. When m is large, n is also large, so the IDF formula yields a small value, suggesting that t is a weak discriminator. Yet if a term appears frequently within the documents of one class, it actually characterizes the text of that class well; such a term should be given a higher weight and selected as a feature word of that class to distinguish it from other documents. This is a deficiency of IDF.
Principle
In a given document, term frequency (TF) is the number of times a given word appears in that document. This count is usually normalized to prevent a bias toward long documents (the same word may have a higher raw count in a long document than in a short one, regardless of whether the word is actually important).
Inverse document frequency (IDF) is a measure of a word's general importance. The IDF of a particular word can be obtained by dividing the total number of documents by the number of documents containing that word, typically followed by taking the logarithm of the quotient.
A high term frequency within a particular document, combined with a low document frequency for that term across the whole collection, produces a high TF-IDF weight. TF-IDF therefore tends to filter out common words while retaining important ones.
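The principle above can be sketched in a few lines of Python. This is a minimal illustration rather than a production implementation: it uses the common log-scaled IDF variant, idf(t) = log(N / df(t)), and assumes each document is already tokenized into a list of words.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute TF-IDF weights for every term in every document.

    corpus: list of documents, each a list of tokens.
    Returns one dict per document, mapping term -> TF-IDF weight.
    """
    n_docs = len(corpus)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for doc in corpus:
        df.update(set(doc))

    weights = []
    for doc in corpus:
        counts = Counter(doc)
        total = len(doc)  # normalize TF by document length
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [
    "the cow jumps over the moon".split(),
    "the moon is bright".split(),
    "a cow eats grass".split(),
]
for w in tf_idf(docs):
    print({t: round(v, 3) for t, v in sorted(w.items())})
```

On this toy corpus, a word appearing in only one document (such as "jumps") receives a higher weight than a word spread across several documents (such as "the"), which is exactly the filtering behavior described above.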
Example
Many different mathematical formulas can be used to compute TF-IDF. Term frequency (TF) is the number of times a word appears in a document divided by the total number of words in that document. If a document contains 100 words in total and the word "cow" appears 3 times, the term frequency of "cow" in that document is 0.03 (3/100). Document frequency (DF) is computed by counting how many documents contain the word "cow" and dividing by the total number of documents in the collection. So if "cow" appears in 1,000 documents and the collection holds 10,000,000 documents, the document frequency is 0.0001 (1,000/10,000,000). Finally, the TF-IDF score is obtained by dividing the term frequency by the document frequency. In this example, the TF-IDF score of "cow" in the collection is 300 (0.03/0.0001). Another form of the formula takes the logarithm of the inverse document frequency instead.
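The arithmetic of this worked example can be checked directly; the snippet below also computes the log-scaled variant mentioned at the end, purely for comparison:

```python
import math

# Running example: "cow" appears 3 times in a 100-word document,
# and in 1,000 of the 10,000,000 documents in the collection.
term_count, doc_length = 3, 100
docs_with_term, total_docs = 1_000, 10_000_000

tf = term_count / doc_length        # 0.03
df = docs_with_term / total_docs    # 0.0001
score = tf / df                     # raw-ratio form: 300

# Log-scaled form: TF times the log of the inverse document frequency.
log_score = tf * math.log(total_docs / docs_with_term)

print(round(score, 6), round(log_score, 4))
```

The raw ratio reproduces the score of 300 from the example; the log-scaled form yields a much smaller number, which is why different TF-IDF variants are not directly comparable to each other.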
Application in vector space model
The TF-IDF weighting scheme is often combined with cosine similarity in the vector space model to measure the similarity between two documents.