Although the algorithm that scores an item for a user as the product of the user's tag weights and the item's tag weights is simple, it tends to recommend popular (hot) items. The weight of an item tag is the number of times the item has been tagged with it, and the weight of a user tag is the number of times the user has used it, so frequently used tags and frequently tagged items dominate the score, weakening personalization and favoring hot recommendations.
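A minimal sketch of this simple counting scheme in Python. The tagging records, user names, and item names below are hypothetical toy data, not from the original:

```python
from collections import defaultdict

# Hypothetical toy data: (user, item, tag) tagging records.
records = [
    ("alice", "item1", "rock"),
    ("alice", "item2", "jazz"),
    ("bob",   "item1", "rock"),
    ("bob",   "item3", "rock"),
    ("carol", "item3", "jazz"),
]

# n(u, b): times user u used tag b; n(b, i): times item i was tagged with b.
user_tags = defaultdict(lambda: defaultdict(int))
tag_items = defaultdict(lambda: defaultdict(int))
for user, item, tag in records:
    user_tags[user][tag] += 1
    tag_items[tag][item] += 1

def score(user, item):
    """Simple score: sum over tags b of n(u, b) * n(b, i)."""
    return sum(n_ub * tag_items[tag].get(item, 0)
               for tag, n_ub in user_tags[user].items())

# n(alice, rock)*n(rock, item3) + n(alice, jazz)*n(jazz, item3) = 1*1 + 1*1 = 2
```

Because the score is a plain product of raw counts, heavily used tags and heavily tagged items always contribute the most, which is exactly the hot-item bias described above.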
TF-IDF can be used to improve the algorithm. Term frequency-inverse document frequency is a weighting technique used in information retrieval and text mining to evaluate how important a word is to a document. The main idea is that if a word or phrase appears frequently in one article but rarely in other articles, it has good discriminating power and is suitable for classification. IDF is the inverse document frequency: the fewer the documents that contain a term, the larger its IDF.
IDF is obtained by dividing the total number of documents by the number of documents containing the word and taking the logarithm:

IDF(w) = log( |D| / (1 + |{d : w in d}|) )

Here |D| is the total number of documents and the denominator counts the documents containing the word; 1 is added to the denominator to avoid division by zero. When the documents containing a word make up only a small fraction of the corpus, the IDF is large, so the word gets a large weight, which is conducive to personalized recommendation. (However, IDF simply boosts the weight of low-frequency words, which may adversely affect the results.)
TF-IDF = TF × IDF then reflects the importance of a word to a document within the entire document set.
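The TF, IDF, and TF-IDF definitions above can be sketched as follows. The toy corpus is hypothetical, and the +1 smoothing in the denominator follows the convention described in the text:

```python
import math

# Hypothetical toy corpus; each document is a list of words.
docs = [
    ["apple", "banana", "apple"],
    ["banana", "cherry"],
    ["cherry", "cherry", "durian"],
]

def tf(word, doc):
    """Term frequency: occurrences of word divided by document length."""
    return doc.count(word) / len(doc)

def idf(word, docs):
    """Inverse document frequency, with +1 in the denominator
    to avoid division by zero, as described in the text."""
    containing = sum(1 for d in docs if word in d)
    return math.log(len(docs) / (1 + containing))

def tf_idf(word, doc, docs):
    """TF-IDF = TF * IDF."""
    return tf(word, doc) * idf(word, docs)
```

Here "apple" appears in only one of three documents, so its IDF is positive, while "banana" appears in two of three documents and its IDF is log(3/3) = 0: rare words are weighted up, common words down.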
Applying the TF-IDF idea, the tag-based recommendation algorithm can be improved as follows:

p(u, i) = Σ_b [ n(u, b) / log(1 + N(b)) ] × n(b, i)

where N(b) indicates how many different users have used tag b, so the contribution of very popular tags is penalized.

Similarly, using N(i) to indicate how many different users have tagged item i, dividing by log(1 + N(i)) reduces the weight of popular items and effectively avoids the impact of hot items:

p(u, i) = Σ_b [ n(u, b) / log(1 + N(b)) ] × [ n(b, i) / log(1 + N(i)) ]
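A sketch of the improved scoring that penalizes both popular tags (by N(b), the number of distinct users of tag b) and popular items (by N(i), the number of distinct users tagging item i). The tagging records are hypothetical toy data:

```python
import math
from collections import defaultdict

# Hypothetical toy data: (user, item, tag) tagging records.
records = [
    ("alice", "item1", "rock"),
    ("alice", "item2", "jazz"),
    ("bob",   "item1", "rock"),
    ("bob",   "item3", "rock"),
    ("carol", "item3", "jazz"),
]

user_tags = defaultdict(lambda: defaultdict(int))   # n(u, b)
tag_items = defaultdict(lambda: defaultdict(int))   # n(b, i)
tag_users = defaultdict(set)                        # |tag_users[b]| = N(b)
item_users = defaultdict(set)                       # |item_users[i]| = N(i)
for user, item, tag in records:
    user_tags[user][tag] += 1
    tag_items[tag][item] += 1
    tag_users[tag].add(user)
    item_users[item].add(user)

def score(user, item):
    """TF-IDF style score: each raw count is damped by
    1 / log(1 + number of distinct users), so popular tags
    and popular items contribute less."""
    s = 0.0
    for tag, n_ub in user_tags[user].items():
        n_bi = tag_items[tag].get(item, 0)
        if n_bi:
            s += (n_ub / math.log(1 + len(tag_users[tag]))) * \
                 (n_bi / math.log(1 + len(item_users[item])))
    return s
```

Compared with the simple counting score, a tag used by many users (or an item tagged by many users) no longer overwhelms the sum, which restores some personalization.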
Recommendation System Learning (2) -- Improvement Based on TF-IDF