Learning Bayesian Personalized Ranking (BPR) with TensorFlow

In the earlier article summarizing the Bayesian Personalized Ranking algorithm, we discussed the principles of Bayesian Personalized Ranking (hereinafter referred to as BPR). In this article we will use BPR to make a simple recommendation from a practical point of view. Since no mainstream open-source library includes BPR, and the algorithm itself is relatively simple, we will implement a simple BPR algorithm with TensorFlow. Let us begin.

1. BPR Algorithm Review

BPR is a ranking algorithm based on matrix factorization. Its training set is a set of triples $<u,i,j>$, each of which indicates that for user $u$, item $i$ ranks higher than item $j$. The training result is two factor matrices $W$ and $H$: supposing there are $m$ users and $n$ items, $W$ has dimension $m \times k$ and $H$ has dimension $n \times k$, where $k$ is a small hidden dimension that we must choose ourselves. For any user $u$ and item $i$, we can compute a ranking score $\overline{x}_{ui} = W_u \bullet H_i$. Computing this ranking score for all items and sorting it gives the recommendation set for user $u$.
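To make the scoring step concrete, here is a minimal numpy sketch; the toy sizes and the random $W$ and $H$ are purely illustrative, not a trained model:

import numpy

m, n, k = 3, 5, 2               # 3 users, 5 items, hidden dimension k = 2 (toy sizes)
W = numpy.random.rand(m, k)     # user factor matrix, m x k
H = numpy.random.rand(n, k)     # item factor matrix, n x k

u = 0                           # score every item i for user u: x_ui = W_u . H_i
x_u = W[u].dot(H.T)             # vector of n ranking scores
print(numpy.argsort(-x_u))      # item indices sorted by descending score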

2. BPR recommendations based on MovieLens 100K

This article uses the MovieLens 100K dataset for the BPR recommendation example (data download link here). The dataset contains ratings of 1682 movies by 943 users. Because BPR is a ranking algorithm, we ignore the rating values and only assume that, for a given user, a movie the user has rated ranks higher than one the user has not seen. Since we will train with TensorFlow using mini-batch gradient descent, we need to divide the data into batches of training and test triples ourselves.
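For reference, each line of u.data is tab-separated in the order user id, item id, rating, timestamp; the two lines below illustrate the format (only the first two fields are used in what follows):

196	242	3	881250949
186	302	3	891717742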

3. Algorithmic flow

Here we begin the algorithm flow, referring to an older BPR implementation on GitHub, with some deletions and enhancements.

First we load the libraries and the data; the code is as follows:

import numpy
import tensorflow as tf
import os
import random
from collections import defaultdict

def load_data(data_path):
    user_ratings = defaultdict(set)
    max_u_id = -1
    max_i_id = -1
    with open(data_path, 'r') as f:
        for line in f.readlines():
            u, i, _, _ = line.split("\t")
            u = int(u)
            i = int(i)
            user_ratings[u].add(i)
            max_u_id = max(u, max_u_id)
            max_i_id = max(i, max_i_id)
    print("max_u_id:", max_u_id)
    print("max_i_id:", max_i_id)
    return max_u_id, max_i_id, user_ratings

data_path = os.path.join('D:\\tmp\\ml-100k', 'u.data')
user_count, item_count, user_ratings = load_data(data_path)

The output is the number of users and the number of movies in the dataset. At the same time, the movies each user has rated are saved in user_ratings.

max_u_id: 943
max_i_id: 1682

Below, for each user u, we randomly select one movie i that the user has rated from user_ratings and save it in user_ratings_test; it will be used later when constructing the training and test sets.

def generate_test(user_ratings):
    user_test = dict()
    for u, i_set in user_ratings.items():
        user_test[u] = random.sample(i_set, 1)[0]
    return user_test

user_ratings_test = generate_test(user_ratings)

Then we need to generate a training batch of triples $<u,i,j>$ for each TensorFlow iteration. The code below builds one batch from user_ratings: the user u is drawn at random, i is drawn at random from user_ratings[u] (redrawing it if it happens to be the held-out test movie), and j is drawn at random from the full movie set, where j must of course guarantee that $(u,j)$ does not appear in user_ratings.

def generate_train_batch(user_ratings, user_ratings_test, item_count, batch_size=512):
    t = []
    for b in range(batch_size):
        u = random.sample(user_ratings.keys(), 1)[0]
        i = random.sample(user_ratings[u], 1)[0]
        while i == user_ratings_test[u]:
            i = random.sample(user_ratings[u], 1)[0]
        j = random.randint(1, item_count)
        while j in user_ratings[u]:
            j = random.randint(1, item_count)
        t.append([u, i, j])
    return numpy.asarray(t)

The next step is to generate the test triples $<u,i,j>$. For each user u, i is the rated movie we randomly extracted into user_ratings_test, and j ranges over all the movies the user has not rated. If, for example, user u has 1000 unrated movies, then this user contributes 1000 test triples.

def generate_test_batch(user_ratings, user_ratings_test, item_count):
    for u in user_ratings.keys():
        t = []
        i = user_ratings_test[u]
        for j in range(1, item_count + 1):
            if not (j in user_ratings[u]):
                t.append([u, i, j])
        yield numpy.asarray(t)

With the training set and test set in hand, we can use TensorFlow to build the data flow of the BPR algorithm. The code is below, where hidden_dim is the hidden dimension k of our matrix factorization, user_emb_w corresponds to matrix $W$, and item_emb_w corresponds to matrix $H$. If you have read the earlier article on the BPR principles, the construction of the loss function below should look very familiar.
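As a reminder of exactly what the code below minimizes (restated to match the variable names that follow: $\lambda$ is regulation_rate, $D$ is a sampled batch of triples, and $\sigma$ is the sigmoid function), the loss is the L2-regularized negative log posterior:

$$loss = \lambda\|\Theta\|^2 - \frac{1}{|D|}\sum_{(u,i,j) \in D}\ln\sigma\left(\overline{x}_{ui} - \overline{x}_{uj}\right)$$

where $\Theta$ stands for the embedding vectors that appear in the batch.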

def bpr_mf(user_count, item_count, hidden_dim):
    u = tf.placeholder(tf.int32, [None])
    i = tf.placeholder(tf.int32, [None])
    j = tf.placeholder(tf.int32, [None])

    with tf.device("/cpu:0"):
        user_emb_w = tf.get_variable("user_emb_w", [user_count + 1, hidden_dim],
                                     initializer=tf.random_normal_initializer(0, 0.1))
        item_emb_w = tf.get_variable("item_emb_w", [item_count + 1, hidden_dim],
                                     initializer=tf.random_normal_initializer(0, 0.1))
        u_emb = tf.nn.embedding_lookup(user_emb_w, u)
        i_emb = tf.nn.embedding_lookup(item_emb_w, i)
        j_emb = tf.nn.embedding_lookup(item_emb_w, j)

    # MF predict: u_i > u_j
    x = tf.reduce_sum(tf.multiply(u_emb, (i_emb - j_emb)), 1, keep_dims=True)

    # AUC for one user:
    # reasonable only if all (u,i,j) pairs are from the same user
    #
    # average AUC = mean(AUC for each user in the test set)
    mf_auc = tf.reduce_mean(tf.to_float(x > 0))

    l2_norm = tf.add_n([
        tf.reduce_sum(tf.multiply(u_emb, u_emb)),
        tf.reduce_sum(tf.multiply(i_emb, i_emb)),
        tf.reduce_sum(tf.multiply(j_emb, j_emb))
    ])

    regulation_rate = 0.0001
    bprloss = regulation_rate * l2_norm - tf.reduce_mean(tf.log(tf.sigmoid(x)))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(bprloss)
    return u, i, j, mf_auc, bprloss, train_op

With the algorithm graph, training set, and test set ready, we can now train the model to solve for the two matrices $W$ and $H$. Note that in the principles article we maximized the log posterior, while here we minimize the corresponding negative log posterior; the two are equivalent. The code is as follows:

with tf.Graph().as_default(), tf.Session() as session:
    u, i, j, mf_auc, bprloss, train_op = bpr_mf(user_count, item_count, 20)
    session.run(tf.initialize_all_variables())

    for epoch in range(1, 4):
        _batch_bprloss = 0
        for k in range(1, 5000):  # uniform samples from the training set
            uij = generate_train_batch(user_ratings, user_ratings_test, item_count)
            _bprloss, _train_op = session.run([bprloss, train_op],
                                              feed_dict={u: uij[:, 0], i: uij[:, 1], j: uij[:, 2]})
            _batch_bprloss += _bprloss

        print("epoch:", epoch)
        print("bpr_loss:", _batch_bprloss / k)
        print("_train_op")

        _user_count = 0   # counts test users so the AUC can be averaged per user
        _auc_sum = 0.0
        # each batch returns only one user's AUC
        for t_uij in generate_test_batch(user_ratings, user_ratings_test, item_count):
            _auc, _test_bprloss = session.run([mf_auc, bprloss],
                                              feed_dict={u: t_uij[:, 0], i: t_uij[:, 1], j: t_uij[:, 2]})
            _user_count += 1
            _auc_sum += _auc
        print("test_loss:", _test_bprloss, "test_auc:", _auc_sum / _user_count)
        print("")

    variable_names = [v.name for v in tf.trainable_variables()]
    values = session.run(variable_names)
    for k, v in zip(variable_names, values):
        print("Variable:", k)
        print("Shape:", v.shape)
        print(v)

Here I take k = 20 and only 3 iterations, mainly to produce results quickly. To train a good BPR model, you need to select a good value of k and iterate for more rounds. My output is shown below, for reference.

epoch: 1
bpr_loss: 0.7236263042427249
_train_op
test_loss: 0.76150036 test_auc: 0.4852939894020929

epoch: 2
bpr_loss: 0.7229681559433149
_train_op
test_loss: 0.76061743 test_auc: 0.48528061393838007

epoch: 3
bpr_loss: 0.7223725006756341
_train_op
test_loss: 0.7597519 test_auc: 0.4852617720521252

Variable: user_emb_w:0
Shape: (944, 20)
[[ 0.08105529  0.04270628 -0.12196594 ...  0.02729403  0.1556453  -0.07148876]
 [ 0.0729574   0.01720054 -0.08198593 ...  0.05565814 -0.0372898   0.11935959]
 [ 0.03591165 -0.11786834  0.04123168 ...  0.06533947  0.11889934 -0.19697346]
 ...
 [-0.05796075 -0.00695129  0.07784595 ... -0.03869986  0.10723818  0.01293885]
 [ 0.13237114 -0.07055715 -0.05505611 ...  0.16433473  0.04535925  0.0701588 ]
 [-0.2069717   0.04607181  0.07822093 ...  0.03704183  0.07326393  0.06110878]]
Variable: item_emb_w:0
Shape: (1683, 20)
[[ 0.09130769 -0.16516572  0.06490657 ...  0.03657753 -0.02265425  0.1437734 ]
 [ 0.02463264  0.13691436 -0.01713235 ...  0.02811887  0.00262074  0.08854961]
 [ 0.00643777  0.02678963  0.04300125 ...  0.03529688 -0.11161     0.11927075]
 ...
 [ 0.05260892 -0.03204868 -0.06910443 ...  0.03732759 -0.03459863 -0.05798787]
 [-0.07953933 -0.10924194  0.11368059 ...  0.06346208 -0.03269136 -0.03078123]
 [ 0.03460099 -0.10591184 -0.1008586  ... -0.07162578  0.00252131  0.06791534]]

Now that we have the $W$ and $H$ matrices, we can rank all items by score for any user u. Note that the printed $W$ and $H$ matrices are in values[0] and values[1] respectively.

So how do we make recommendations for a user? Take the first user as an example: the row $w_0$ of $W$ corresponding to this user is values[0][0], so we can easily compute the user's predicted scores for all movies. The code is as follows:

session1 = tf.Session()
u1_dim = tf.expand_dims(values[0][0], 0)
u1_all = tf.matmul(u1_dim, values[1], transpose_b=True)
result_1 = session1.run(u1_all)
print(result_1)

The output is a scoring vector:

[[-0.01707731  0.06217583 -0.01760234 ...  0.067231    0.08989487 -0.05628442]]
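Since values[0] and values[1] are already plain numpy arrays at this point, the same score vector can also be computed directly with numpy rather than building another TensorFlow graph; a minimal equivalent sketch (my own shortcut, not part of the original code):

result_1_np = numpy.dot(values[0][0], values[1].T)  # shape (1683,): same scores as result_1
print(result_1_np)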

Now we recommend 5 movies to the first user. The code is as follows:

Print (" The following is a recommendation to user 0:"= numpy.squeeze (result_1) P[numpy.argsort (P) [:-5]] = 0  for inch Range (len (P)):     if p[index]! = 0        :print (index, P[index])

The output is as follows:

The following are recommendations for user 0:
54 0.1907271
77 0.17746378
828 0.17181025
1043 0.16989286
1113 0.17458326

4. Summary

The above is the process of building the BPR algorithm model with TensorFlow and using it to make MovieLens 100K recommendations. In an actual product project, if you want to use the BPR algorithm, pay attention to tuning the hidden dimension k of the matrix factorization, and iterate for as many rounds as is practical.

In addition, we can also work on the BPR loss function. For example, we can improve $\overline{x}_{uij} = \overline{x}_{ui} - \overline{x}_{uj}$ by adding a decay factor based on rating time, so that our ranking recommendations can also take factors such as time into account.
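As a rough sketch of that idea (my own illustration, not from the original post; decay_rate and the per-rating age placeholder t_age are assumptions), the loss line inside bpr_mf could weight each triple's log-sigmoid term so that older positive interactions count less:

# Sketch only: x, l2_norm and regulation_rate are the tensors/constants
# already built inside bpr_mf above; decay_rate and t_age are assumed here.
decay_rate = 0.01                            # assumed decay hyperparameter
t_age = tf.placeholder(tf.float32, [None])   # age of each (u, i) rating, e.g. in days
decay = tf.exp(-decay_rate * t_age)          # exponential time-decay weight, shape [None]
log_sigmoid = tf.squeeze(tf.log(tf.sigmoid(x)), 1)   # align shape [None, 1] -> [None]
bprloss = regulation_rate * l2_norm - tf.reduce_mean(decay * log_sigmoid)

Each training batch would then also feed t_age alongside u, i, and j.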

That is all the content of learning BPR with TensorFlow.

(Reposting is welcome; please indicate the source. Discussion is welcome at: [email protected])
