TF-IDF
The TF and IDF formulas given here are the ones used by TF-IDF in scikit-learn. They differ somewhat from the original textbook formulas, and they also vary with certain parameters.
Terminology:
Corpus: the collection of all documents.
Document: an ordered sequence of words; it can be an article, a sentence, and so on.

Word frequency (TF)
In a given document, term frequency (TF) measures how often a given term appears in that document. The raw count is normalized by the total number of terms in the document to keep it from favoring long documents (the same word is likely to have a higher raw count in a long document than in a short one, regardless of how important it actually is). For a term t in a particular document d, the term frequency tf_{d,t} can be expressed as:
tf_{d,t} = \frac{n_{d,t}}{\sum_k n_{d,k}}
where tf_{d,t} is the frequency of term t in document d, and n_{d,t} is the number of occurrences of term t in document d.
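As a minimal sketch of the formula above (plain Python; the function name and the toy document are illustrative, not from scikit-learn):

```python
from collections import Counter

def term_frequency(document_tokens):
    """Return tf_{d,t} = n_{d,t} / sum_k n_{d,k} for every term t in one document."""
    counts = Counter(document_tokens)     # n_{d,t}: raw occurrence count per term
    total = sum(counts.values())          # sum_k n_{d,k}: total number of tokens
    return {term: n / total for term, n in counts.items()}

tf = term_frequency(["the", "cat", "sat", "on", "the", "mat"])
print(tf["the"])  # "the" occurs 2 times out of 6 tokens -> 2/6
```

By construction the values sum to 1, so long and short documents are put on the same footing.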
Document frequency (DF)

The document frequency df_t is the number of documents that contain the term t.

Inverse document frequency (IDF)
Inverse document frequency (IDF) measures how much general importance a term carries. For a specific term t:
idf_t = 1 + \log \frac{|D|}{df_t}
where |D| is the total number of documents in the corpus. The +1 keeps terms that appear in every document from being ignored entirely, i.e. it ensures idf_t ≠ 0.
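A hedged sketch of this unsmoothed IDF (this corresponds to scikit-learn with smooth_idf=False; the function name and toy corpus are illustrative):

```python
import math

def inverse_document_frequency(corpus, term):
    """idf_t = 1 + log(|D| / df_t), natural log, as in scikit-learn's formula."""
    num_docs = len(corpus)                        # |D|: total number of documents
    df = sum(1 for doc in corpus if term in doc)  # df_t: documents containing term
    return 1.0 + math.log(num_docs / df)

corpus = [{"the", "cat"}, {"the", "dog"}, {"a", "bird"}]
print(inverse_document_frequency(corpus, "the"))   # 1 + ln(3/2)
print(inverse_document_frequency(corpus, "bird"))  # 1 + ln(3/1)
```

A term appearing in all documents gets idf = 1 + ln(1) = 1, so it is down-weighted but, thanks to the +1, never zeroed out.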
Sometimes, to guard against division by zero, a variant that adds one to both the numerator and the denominator is used instead; in the scikit-learn code this only requires setting the parameter smooth_idf=True. It is equivalent to pretending there is one extra document that contains every term:

idf_t = 1 + \log \frac{1 + |D|}{1 + df_t}
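As a quick check of the smoothed formula against scikit-learn itself (assuming scikit-learn is installed; the toy corpus is illustrative): with smooth_idf=True, the fitted idf_ attribute should match 1 + ln((1 + |D|) / (1 + df_t)) computed by hand.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["the cat sat", "the dog ran", "a bird flew"]

vec = TfidfVectorizer(smooth_idf=True)  # smooth_idf=True is the default
vec.fit(corpus)

# Manual smoothed IDF: 1 + ln((1 + |D|) / (1 + df_t)) per vocabulary term
n_docs = len(corpus)
df = (vec.transform(corpus).toarray() > 0).sum(axis=0)  # df_t per column
manual_idf = 1.0 + np.log((1 + n_docs) / (1 + df))

print(np.allclose(vec.idf_, manual_idf))  # the two computations agree
```

The df counts are recovered from the nonzero pattern of the transformed matrix, so the comparison uses exactly the vocabulary and column order scikit-learn chose.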