A simple and easy-to-learn deep learning algorithm: Wide & Deep Learning


This article is a reading summary of the paper Wide & Deep Learning for Recommender Systems, which shows that combining a Wide model and a Deep model has a very important effect on the performance of a recommendation system (here, app recommendation).

1. Background

The paper proposes the Wide & Deep model, which aims to give the trained model both memorization and generalization ability. Memorization means learning the correlations between items or features that appear in the historical data. Generalization means transferring those correlations to discover new feature combinations that have rarely or never appeared in the historical data.

In a recommendation system, memorization reflects the accuracy of the recommendations, and generalization reflects their novelty.

The Wide & Deep model is designed so that the trained model has both of these characteristics.

2. Wide & Deep Model

2.1. Structure of the Wide & Deep Model

The structure of the Wide & Deep model is shown in the following illustration:



The Wide & Deep model consists of two parts: the wide part, shown on the left of the figure above, and the deep part, shown on the right.

2.2. Wide Model

The wide model, shown on the left of the figure above, is simply a generalized linear model:

$y = w^T x + b$

where the feature vector $x = [x_1, x_2, \cdots, x_d]$ is $d$-dimensional and $w = [w_1, w_2, \cdots, w_d]$ are the model parameters. Finally, a sigmoid function is applied to $y$ to produce the final output.
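As a minimal illustration, here is a sketch of the wide part in Python/NumPy; the feature values and weights below are made up for the example and are not from the paper.

```python
import numpy as np

def wide_predict(x, w, b):
    """Wide part: generalized linear model y = w^T x + b,
    followed by a sigmoid to turn the score into a probability."""
    y = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-y))

# Hypothetical 4-dimensional feature vector and parameters (illustration only).
x = np.array([1.0, 0.0, 1.0, 0.5])
w = np.array([0.2, -0.1, 0.4, 0.3])
b = -0.05
print(wide_predict(x, w, b))  # a probability in (0, 1)
```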

2.3. Deep Model

The deep model, shown on the right of the figure above, is a feedforward neural network. Deep neural networks usually require continuous, dense inputs, so sparse, high-dimensional categorical features are first converted into low-dimensional dense vectors; this process is called embedding.

During training, the embedding vectors are randomly initialized and then gradually updated as the model is trained; that is, the embeddings themselves participate in training as model parameters.
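A minimal sketch of such an embedding in TensorFlow/Keras; the vocabulary size and embedding dimension below are assumptions for illustration, not values from the paper.

```python
import tensorflow as tf

vocab_size = 10000   # assumed number of distinct categories (e.g. app IDs)
embedding_dim = 32   # assumed dimension of the dense embedding vector

# The embedding table is randomly initialized and updated by backpropagation
# together with the rest of the network's parameters.
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)

category_ids = tf.constant([[3], [42], [7]])   # integer-encoded categorical feature
dense_vectors = embedding(category_ids)        # shape: (3, 1, 32)
print(dense_vectors.shape)
```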

The hidden layer is calculated by:

$a^{(l+1)} = f(W^{(l)} a^{(l)} + b^{(l)})$

where $a^{(l)}$, $W^{(l)}$, and $b^{(l)}$ are the activations, weights, and bias of the $l$-th layer, and $f$ is an activation function such as ReLU.
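As a quick sketch of this computation in NumPy (the layer sizes and random values are purely illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def hidden_layer(a_l, W_l, b_l):
    """One hidden layer: a^(l+1) = f(W^(l) a^(l) + b^(l)), with f = ReLU."""
    return relu(W_l @ a_l + b_l)

# Assumed sizes: a 32-dimensional input (e.g. concatenated embeddings)
# fed into a 16-unit hidden layer.
rng = np.random.default_rng(0)
a0 = rng.normal(size=32)
W0 = rng.normal(size=(16, 32))
b0 = np.zeros(16)
a1 = hidden_layer(a0, W0, b0)   # activations of the next layer, shape (16,)
print(a1.shape)
```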

2.4. Joint Training of the Wide & Deep Model

Joint training trains the wide model and the deep model at the same time, and takes a weighted sum of the two models' outputs as the final prediction:

$P(y = 1 \mid x) = \sigma\left(w_{wide}^T [x, \phi(x)] + w_{deep}^T a^{(l_f)} + b\right)$

where $\phi(x)$ denotes the cross-product transformations of the raw features $x$, $a^{(l_f)}$ is the activation of the final hidden layer of the deep model, $\sigma$ is the sigmoid function, and $b$ is the bias term.
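A small sketch of this combined prediction in NumPy; the shapes and variable names are illustrative assumptions, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wide_deep_predict(x, phi_x, a_lf, w_wide, w_deep, b):
    """Sum the wide logit (over the raw features x and their cross-product
    transformations phi(x)) and the deep logit (over the final hidden
    activations a^(l_f)), then apply the sigmoid."""
    wide_logit = np.dot(w_wide, np.concatenate([x, phi_x]))
    deep_logit = np.dot(w_deep, a_lf)
    return sigmoid(wide_logit + deep_logit + b)
```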

Training method: the wide part of the model is optimized with FTRL, and the deep part with AdaGrad.
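The sketch below shows one way to set this up with TensorFlow 2.x, which ships premade LinearModel and WideDeepModel classes under tf.keras.experimental (availability depends on the TensorFlow version); the layer sizes and learning rates are assumptions for illustration.

```python
import tensorflow as tf

# Wide part: a linear model. Deep part: a small feedforward network
# (the 256/128 layer sizes here are illustrative, not from the paper).
wide = tf.keras.experimental.LinearModel()
deep = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])
model = tf.keras.experimental.WideDeepModel(wide, deep, activation="sigmoid")

# Joint training: one optimizer per part, FTRL for the wide model and
# AdaGrad for the deep model, as described above.
model.compile(
    optimizer=[tf.keras.optimizers.Ftrl(learning_rate=0.01),
               tf.keras.optimizers.Adagrad(learning_rate=0.01)],
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC()],
)

# model.fit([wide_inputs, deep_inputs], labels, epochs=10)
```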

3. App Recommendation System

The paper applies the Wide & Deep model described above to app recommendation in Google Play.

3.1. Recommendation System

For recommendation systems, the most common structure is shown in the following illustration:



When a user visits the app store, a query (request) is generated; after the request reaches the recommendation system, the system returns a list of recommended apps.

In a real recommendation system, the process is usually divided into two stages, retrieval and ranking. In the figure above, retrieval is responsible for fetching a set of apps relevant to the user from the database, and ranking then scores these retrieved apps; finally, the list is returned to the user ordered by score.
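A highly simplified sketch of this two-stage flow in Python; retrieve() and score() below are hypothetical stand-ins for the real retrieval system and the Wide & Deep ranking model.

```python
def retrieve(user, query, database):
    """Hypothetical retrieval step: in practice this selects a small set of
    candidate apps from the database; here it simply returns everything."""
    return list(database)

def score(user, app):
    """Hypothetical stand-in for the ranking model's predicted
    installation probability P(y=1|x)."""
    return hash((user, app)) % 100 / 100.0

def recommend(user, query, database, k=10):
    """Two-stage recommendation: retrieval narrows the candidate set,
    then the ranking model scores and orders the candidates."""
    candidates = retrieve(user, query, database)
    ranked = sorted(candidates, key=lambda app: score(user, app), reverse=True)
    return ranked[:k]

print(recommend("user_42", "photo editor", ["app_a", "app_b", "app_c"], k=2))
```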

3.2. Features for App Recommendation

Before training the model, the most important work is preparing the training data and selecting the features. For app recommendation, the available data include user data and impression (exposure) data. Each training sample corresponds to one impression, and its label is 1 if the app was installed and 0 otherwise.

Categorical features are mapped to vectors through a dictionary (vocabulary), while continuous real-valued features are normalized to the interval $[0, 1]$.
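A toy sketch of this preprocessing, using a made-up vocabulary and simple min-max scaling as one possible way to map continuous values into [0, 1]:

```python
# Map a categorical value to an integer ID via a vocabulary, and scale a
# continuous value into [0, 1] (toy values, for illustration only).
vocabulary = {"games": 0, "social": 1, "news": 2}   # assumed category vocabulary

def encode_categorical(value, vocab):
    return vocab.get(value, len(vocab))    # unseen values map to an "unknown" ID

def normalize_continuous(value, min_value, max_value):
    return (value - min_value) / (max_value - min_value)

print(encode_categorical("social", vocabulary))      # -> 1
print(normalize_continuous(30.0, 0.0, 100.0))        # -> 0.3
```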



3.3. Evaluation Metrics

Two kinds of metrics are used: online and offline. Online, A/B tests are run and the app acquisition (installation) rate is used as the metric; offline, AUC is used to evaluate the model.

References

- Cheng H T, Koc L, Harmsen J, et al. Wide & Deep Learning for Recommender Systems. 2016: 7-10.
- Wide & Deep, Deep & Cross, and their TensorFlow implementations
- Wide & Deep: deep learning applied to CTR prediction
- Notes on Wide & Deep Learning for Recommender Systems
- The application of deep learning to ranking on the Meituan-Dianping recommendation platform
- TensorFlow Linear Model and Wide & Deep Learning
