TF–IDF Algorithm Interpretation
TF–IDF, an abbreviation for term frequency–inverse document frequency, measures how important a word is to a document within a corpus. It is commonly used in information retrieval and text mining.
A natural idea is that the more often a word appears in a document, the more important it is to that document. At the same time, if the word appears in a very large number of documents, it is probably a very common word that contributes little information to any one document, such as the stop word 'the'. So we combine two quantities: the number of occurrences of a word within a document, and the number of documents that contain it. A word is important to a document if it appears many times in that document while very few documents in the whole corpus contain it. Multiplying the term frequency (TF) within the document by the inverse document frequency (IDF) captures exactly this idea.
Definitions:
TF: The simplest choice is the raw count of a word in a document. For example, TF(t, D) denotes the number of times the term t appears in document D.
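The raw-count definition of TF above can be sketched in a few lines of Python. The function name `tf` and the whitespace tokenization are assumptions for illustration, not part of the original:

```python
from collections import Counter

def tf(term, document):
    # Raw count of the term in the document.
    # Assumption: the document is tokenized by lowercasing and
    # splitting on whitespace (no punctuation handling).
    words = document.lower().split()
    return Counter(words)[term.lower()]
```

For example, `tf("the", "the cat sat on the mat")` returns 2, and a term absent from the document returns 0 (since `Counter` defaults missing keys to zero).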
IDF: IDF measures how much information a word provides. If a word appears in every document in the corpus, it provides essentially no information; the word 'the', for example, occurs in almost any text. IDF is usually computed with a logarithm:

IDF(t) = log( n / (number of documents containing t) )
where n represents the total number of documents, and the denominator represents the number of documents in the corpus that contain the term t.
You can then obtain the TF-IDF value of term t in document D of the corpus:

TF-IDF(t, D) = TF(t, D) × IDF(t)
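Putting the two parts together, the product above can be sketched as one self-contained function. The name `tf_idf`, the whitespace tokenization, and the choice to return 0.0 for a term absent from the corpus are all illustrative assumptions:

```python
import math
from collections import Counter

def tf_idf(term, document, corpus):
    # TF: raw count of the term in the document.
    counts = Counter(document.lower().split())
    tf = counts[term.lower()]
    # IDF: log of (total documents / documents containing the term).
    n = len(corpus)
    df = sum(1 for doc in corpus if term.lower() in doc.lower().split())
    # Assumption: score 0.0 for a term that appears in no document at all.
    return tf * math.log(n / df) if df else 0.0
```

On a toy corpus, a word like 'the' that occurs in most documents scores lower per occurrence than a rarer word, which is exactly the weighting the idea above calls for.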
The next article covers the Python implementation.
TF–IDF Algorithm Interpretation and Python Implementation (Part 1)