Meituan's vision is to connect consumers and merchants, and search plays an important role in that. As the business grows, the number of merchants and deals on Meituan is increasing rapidly, which makes search ranking ever more important: ranking optimization helps users find and purchase the merchants and deals that meet their needs, improving both user experience and conversion.
Compared with traditional web search, Meituan search has its own characteristics: 90% of transactions happen on mobile. On the one hand, this demands stronger personalization of ranking. For the query "hotpot", for example, hotpot restaurant A near Wudaokou in Beijing is a good result for user U1 located in Wudaokou, but not necessarily for user U2 in Wangjing. On the other hand, we have accumulated rich and accurate user behavior data on the client, from which we mine users' geographic, category, and price preferences to guide personalized ranking.
For the characteristics of Meituan's O2O business, we implemented a search ranking solution that shows clear gains over rule-based ranking. Based on it, we also abstracted a general O2O ranking solution that can be deployed to other products and verticals in only 1-2 days; it is currently applied to hot words, suggestion, hotels, KTV, and other products and verticals.
We will introduce this general O2O ranking solution in two parts, online and offline. This article covers the online part: the online service framework, feature loading, and online prediction modules. The next article will focus on the offline pipeline.

Ranking System
To support fast and efficient iteration of ranking algorithms, the ranking system is designed to support flexible A/B testing and accurate effect tracking.
The Meituan search ranking system, shown above, consists of three modules: offline data processing, online data processing, and online service.

Offline Data Processing
Search display, click, order, and payment logs are stored in HDFS/Hive. The offline data flow schedules multiple MapReduce tasks daily to analyze these logs. Related tasks include:
Offline feature mining
Produces features of the deal (group-buy item), POI (merchant), user, and query dimensions for the ranking model.
Data cleaning, labeling & model training
Data cleaning removes dirty data introduced by spiders, cheating, and other sources; the cleaned data is then labeled for model training.
Effect report generation
Computes metrics on algorithm effectiveness to guide ranking improvements.
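As a minimal sketch of the effect-report step, the aggregation below computes per-deal CTR and CVR from display logs. The record schema (`deal_id`, `clicked`, `ordered`) is a hypothetical simplification, not the actual Meituan log format.

```python
from collections import defaultdict

def effect_report(log_records):
    """Aggregate per-deal impressions, clicks, and orders into CTR and CVR."""
    stats = defaultdict(lambda: {"impressions": 0, "clicks": 0, "orders": 0})
    for rec in log_records:
        s = stats[rec["deal_id"]]
        s["impressions"] += 1
        s["clicks"] += rec["clicked"]
        s["orders"] += rec["ordered"]
    report = {}
    for deal_id, s in stats.items():
        report[deal_id] = {
            "ctr": s["clicks"] / s["impressions"],
            # CVR is orders per click; guard against zero clicks.
            "cvr": s["orders"] / s["clicks"] if s["clicks"] else 0.0,
        }
    return report
```

In production this logic would run as a scheduled MapReduce job over the day's logs rather than in memory.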
Features, as the input of the ranking model, are the foundation of the ranking system. Abnormal changes in features directly affect ranking quality. Feature monitoring mainly tracks feature coverage and value distribution, helping us find related problems in time.
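A minimal sketch of the coverage and distribution checks described above; the feature name, row format, and alert threshold are illustrative, not Meituan's actual monitoring configuration.

```python
import math

def monitor_feature(rows, name, min_coverage=0.95):
    """Return coverage plus mean/std of one feature, flagging low coverage."""
    values = [r[name] for r in rows if r.get(name) is not None]
    coverage = len(values) / len(rows)
    mean = sum(values) / len(values) if values else float("nan")
    var = sum((v - mean) ** 2 for v in values) / len(values) if values else float("nan")
    return {
        "coverage": coverage,                      # fraction of rows with a value
        "mean": mean,                              # distribution summary stats
        "std": math.sqrt(var) if values else float("nan"),
        "alert": coverage < min_coverage,          # e.g. an upstream job failed
    }
```

A sudden drop in coverage or a shifted mean is typically the first symptom of a broken feature pipeline.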
Online Data Processing

Corresponding to the offline pipeline, the online pipeline uses tools such as Storm/Spark Streaming to analyze and process the real-time log stream, producing real-time features, real-time reports, and monitoring data, and updating the online ranking model.

Online Service (Rank Service)
When the Rank Service receives a search request, it calls the recall service to obtain a candidate POI/deal set, assigns a ranking policy/model to the user according to the A/B test configuration, and uses that policy/model to rank the candidates.
The following figure shows the ranking flow inside the Rank Service.
L1: coarse-grained ranking (fast)
Ranks the candidate set coarsely, using fewer features and simple models or rules.
L2: fine-grained ranking (slower)
Ranks the top N results of L1 at finer granularity. This layer loads features from the feature store (through FeatureLoader) and applies the model assigned by the A/B test configuration.
L3: business rule intervention
Adjusts the L2 ranking where appropriate by applying business rules and manual interventions.
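The three stages above can be sketched as a cascade; the scorer functions here are hypothetical stand-ins for the real models and rules.

```python
def rank(candidates, l1_score, l2_score, business_boost, top_n=100):
    """Three-stage cascade: cheap L1 cut, full-model L2, rule-based L3."""
    # L1: coarse ranking with a cheap scorer, keeping only the top N.
    coarse = sorted(candidates, key=l1_score, reverse=True)[:top_n]
    # L2: fine ranking of the survivors with the full feature model.
    fine = sorted(coarse, key=l2_score, reverse=True)
    # L3: business rules adjust the final order; Python's sort is stable,
    # so items with equal boost keep their L2 order.
    return sorted(fine, key=business_boost, reverse=True)
```

The cascade keeps latency bounded: the expensive L2 model only ever scores N candidates, however large the recall set is.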
The Rank Service records display logs into the log collection system for online/offline processing.

A/B Test
Traffic splitting for A/B tests is done on the Rank Service side. We divide traffic into buckets by UUID (user identifier); each bucket corresponds to a ranking policy, and traffic in a bucket is ranked with that policy. Splitting by UUID ensures a consistent experience for each user.
Below is a simple example of an A/B test configuration.
[
    {
        "beginBucket": 0,
        "endBucket": …,
        "whiteList": […],
        "strategy": "Algo-1"
    },
    {
        "beginBucket": …,
        "endBucket": …,
        …
    }
]
For an invalid UUID, each request is randomly assigned a bucket, so effect comparisons are not skewed. The whitelist mechanism guarantees that configured users always get a given policy, which helps with related testing.
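A minimal sketch of this bucketing logic. The MD5 hash, bucket count, and config field names mirror the example above but are assumptions, not the production implementation.

```python
import hashlib
import random

N_BUCKETS = 100  # illustrative bucket count

def assign_bucket(uuid):
    """Valid UUIDs hash to a stable bucket; invalid/empty UUIDs get a
    random bucket per request so effect comparisons stay unbiased."""
    if not uuid:
        return random.randrange(N_BUCKETS)
    digest = hashlib.md5(uuid.encode("utf-8")).hexdigest()
    return int(digest, 16) % N_BUCKETS

def pick_strategy(uuid, config):
    """Whitelist overrides buckets; otherwise match the bucket range."""
    for entry in config:
        if uuid in entry.get("whiteList", []):
            return entry["strategy"]
    bucket = assign_bucket(uuid)
    for entry in config:
        if entry["beginBucket"] <= bucket <= entry["endBucket"]:
            return entry["strategy"]
    return "default"
```

Hashing rather than, say, round-robin assignment is what gives the per-user consistency mentioned above: the same UUID always lands in the same bucket.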
Besides A/B testing, we also apply the interleaving method to compare two ranking algorithms. Compared with an A/B test, interleaving is more sensitive to differences between ranking algorithms and can determine which of two algorithms is better with fewer samples. It lets us use a small amount of traffic to quickly eliminate poor algorithms, improving policy iteration efficiency.
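One standard way to implement this comparison is team-draft interleaving (Radlinski & Craswell, cited below): the two rankings alternately "draft" results into one merged list, and clicks are credited to whichever side contributed the clicked result. This sketch is illustrative, not necessarily the variant used in production.

```python
import random

def next_new(ranking, chosen):
    """First document in `ranking` not already drafted."""
    for doc in ranking:
        if doc not in chosen:
            return doc
    return None

def team_draft(rank_a, rank_b, length, rng=random.random):
    """Merge two rankings; `team` records which side each result credits."""
    merged, team = [], {}
    picks_a = picks_b = 0
    while len(merged) < length:
        # The side with fewer picks drafts next; ties broken by coin flip.
        a_first = picks_a < picks_b or (picks_a == picks_b and rng() < 0.5)
        doc = None
        for side in (["A", "B"] if a_first else ["B", "A"]):
            ranking = rank_a if side == "A" else rank_b
            doc = next_new(ranking, team)
            if doc is not None:
                team[doc] = side
                if side == "A":
                    picks_a += 1
                else:
                    picks_b += 1
                break
        if doc is None:
            break  # both rankings exhausted
        merged.append(doc)
    return merged, team
```

At serving time the merged list is shown to the user; over many sessions, the side whose results attract more clicks wins the comparison.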
Feature Loading

Search ranking involves many kinds of features, and feature fetching and computation are the bottleneck of the Rank Service's response time. We designed the FeatureLoader module, which fetches and computes features in parallel according to their dependency relations, effectively reducing feature loading time. On real traffic, parallel feature loading is on average about 20 milliseconds faster than serial loading.
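The idea of dependency-aware parallel loading can be sketched with a thread pool: features whose dependencies are satisfied run concurrently, level by level. The task/dependency representation here is an assumption for illustration; the real FeatureLoader is actor-based, as described next.

```python
from concurrent.futures import ThreadPoolExecutor

def load_features(tasks, deps):
    """tasks: name -> fn(loaded) -> value; deps: name -> [prerequisite names].
    Runs every ready task in parallel, then moves to the next level."""
    loaded = {}
    with ThreadPoolExecutor() as pool:
        pending = dict(tasks)
        while pending:
            ready = [n for n in pending
                     if all(d in loaded for d in deps.get(n, []))]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            # Submit all ready tasks at once; they only read already-loaded deps.
            futures = {n: pool.submit(pending.pop(n), loaded) for n in ready}
            for name, fut in futures.items():
                loaded[name] = fut.result()
    return loaded
```

The win comes from overlapping independent fetches (e.g. user features and POI features) instead of waiting on them one by one.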
We implemented FeatureLoader with Akka. As shown in the illustration above, feature fetching and computation are abstracted and encapsulated as Akka actors, which Akka schedules and executes in parallel.

Features and Models
Meituan has applied machine learning methods (learning to rank) in search ranking since September 2013, with significant gains. This benefits from accurate data labels: user behaviors such as clicks, orders, and payments effectively reflect their preferences. We continuously optimize search ranking along two lines, feature mining and model optimization. Below is some of our work on feature usage, data labeling, ranking algorithms, position bias handling, and cold start mitigation.

Features
Given Meituan's business, feature selection focuses on four dimensions: user, query, deal/POI, and search context.
User features: category preferences, consumption level, and geographic location mined from user behavior.
Query features: query length, historical clicks, conversion rate, and query type (merchant/category/landmark).
Deal/POI features: sales, price, reviews, discount rate, category, and historical conversion rate.
Context features: time, search entry point, and so on.
In addition, some features derive from relationships between these dimensions: the user's clicks and orders on a deal/POI, and the distance between the user and the POI, are important ranking factors; the textual and semantic relevance between the query and the deal/POI are key features of the model.
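The user-POI distance feature mentioned above can be computed with the haversine formula from the two coordinates; this is a generic sketch, not necessarily the exact formula used in production.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```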
Model

In applying learning to rank, we mainly use the pointwise approach. User clicks, orders, and payments are used to label positive samples. Statistically, clicks, orders, and payments reflect increasing degrees of match between a sample and the user's need, so the corresponding samples are all treated as positives and assigned increasing weights.
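The labeling scheme can be sketched as follows; the specific weight values are illustrative assumptions, since the article does not state the production settings.

```python
def label_sample(clicked, ordered, paid):
    """Return (label, weight) for one impression: deeper actions are
    positives with larger weights, per the pointwise scheme above."""
    if paid:
        return 1, 3.0   # payment: strongest match signal
    if ordered:
        return 1, 2.0   # order placed but not yet paid
    if clicked:
        return 1, 1.0   # click only
    return 0, 1.0       # shown but ignored: negative sample
```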
Several types of models run online, mainly including:
Gradient boosting decision/regression trees (GBDT/GBRT)
GBDT is a non-linear model widely used in learning to rank. We developed a GBDT tool based on Spark, which parallelizes the gradient-fitting step of tree construction to shorten training time. The trees are designed as three-way trees, with a dedicated branch to handle missing feature values.
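A minimal sketch of prediction in such a three-way tree: each internal node has left/right children plus a third branch taken when the feature is absent. The node layout is an illustrative assumption.

```python
def predict(node, features):
    """node: {'feature', 'threshold', 'left', 'right', 'missing'} for an
    internal node, or {'value': leaf_score} for a leaf.
    features: dict that may lack some keys (missing values)."""
    while "value" not in node:
        x = features.get(node["feature"])
        if x is None:
            node = node["missing"]   # third branch: feature is absent
        elif x < node["threshold"]:
            node = node["left"]
        else:
            node = node["right"]
    return node["value"]
```

The advantage over imputing a default value is that the tree can learn, per split, where samples with a missing feature should actually go.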
By choosing different loss functions, the boosting tree method can handle both regression and classification problems. In our application, we choose the more effective logistic likelihood loss and model the problem as binary classification.
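Concretely, with labels $y \in \{-1, +1\}$ and ensemble score $F(x)$, one standard way to write this loss is the two-class binomial negative log-likelihood from Friedman (2001), cited below:

```latex
L\bigl(y, F(x)\bigr) = \log\!\left(1 + e^{-2\,y\,F(x)}\right),
\qquad
p(y = 1 \mid x) = \frac{1}{1 + e^{-2F(x)}}
```

Each boosting round fits a tree to the gradient of this loss, and the final score maps back to a click/conversion probability via the logistic link.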
Logistic regression (LR)
Following Facebook's paper (cited below), we use GBDT to generate some of the LR features: the leaf each sample lands in, per tree, is treated as a binary feature. The LR model is trained online with the FTRL algorithm.
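The GBDT-to-LR transform from that paper can be sketched as follows: each tree contributes a one-hot block whose active slot is the index of the leaf the sample reaches. The `trees`/`leaf_count` representation is an illustrative assumption.

```python
def gbdt_features(trees, leaf_count, features):
    """Return a one-hot vector with exactly one active slot per tree.
    Each tree is a callable mapping a feature dict to its leaf index."""
    vec = [0.0] * (len(trees) * leaf_count)
    for i, tree in enumerate(trees):
        leaf = tree(features)
        vec[i * leaf_count + leaf] = 1.0
    return vec
```

The resulting sparse binary vector is what the (FTRL-trained) LR model consumes, letting a linear model exploit the non-linear feature combinations the trees discovered.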
Model evaluation has an offline part and an online part. Offline, we evaluate models with AUC (Area Under the ROC Curve) and MAP (Mean Average Precision); online, A/B tests measure the model's real-world effect. Together these support iterative optimization of the algorithms.
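As a sketch of the offline AUC computation, the rank-sum (Mann-Whitney) formulation below gives the probability that a random positive is scored above a random negative; ties in score are broken arbitrarily in this simplified version.

```python
def auc(labels, scores):
    """AUC via rank-sum: labels are 0/1, scores are model outputs."""
    pairs = sorted(zip(scores, labels))          # ascending by score
    pos = sum(labels)
    neg = len(labels) - pos
    # Sum of 1-based ranks of the positive samples.
    rank_sum = sum(rank for rank, (_, label) in enumerate(pairs, 1) if label == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)
```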
Cold Start

In our search ranking system, the cold start problem appears when new merchants or new deals go online, or when new users start using Meituan: we do not yet have enough data to infer users' preferences for these items. Merchant cold start is the main problem, and we mitigate it in two ways. On one hand, we introduce features such as text relevance, category similarity, distance, and category attributes into the model, so predictions remain reasonably accurate without sufficient impressions and feedback. On the other hand, we introduce an explore & exploit mechanism that gives new merchants and deals modest exposure, in order to collect feedback data and improve predictions.
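One simple explore & exploit policy, shown here only as an illustration (the epsilon value, slot choice, and single-item insertion are assumptions, not the production mechanism), occasionally promotes an unexposed item into the result list:

```python
import random

def explore_exploit(ranked, new_items, epsilon=0.1, slot=5, rng=random.random):
    """With probability epsilon, insert one unexposed item at a fixed
    slot to collect feedback; otherwise serve the exploit ranking as-is."""
    result = list(ranked)
    if new_items and rng() < epsilon:
        result.insert(min(slot, len(result)), new_items[0])
    return result
```

The exploration traffic buys the feedback data that lets the model score new merchants accurately later, at a small short-term cost to ranking quality.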
Position Bias

On mobile, results are presented as a list page, and display position strongly affects user behavior. In feature mining and training-data labeling, we account for the bias introduced by display position. For example, when computing CTR (click-through rate) statistics, we use an examination model to remove the effect of display position.
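Under the examination hypothesis, P(click) = P(examine position) × P(click | examined), so dividing clicks by the expected number of examinations rather than raw impressions yields a position-debiased CTR. The per-position examination probabilities below are illustrative; in practice they are estimated from click logs (see Craswell et al., cited below).

```python
def debiased_ctr(impressions, exam_prob):
    """impressions: list of (position, clicked) pairs.
    exam_prob: position -> estimated probability the user examined it."""
    expected_exams = sum(exam_prob[pos] for pos, _ in impressions)
    clicks = sum(clicked for _, clicked in impressions)
    return clicks / expected_exams
```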
Summary

This article introduced the architecture, algorithms, and main modules of the online part of Meituan's search ranking system. In subsequent articles, we will focus on the offline part of the ranking system.
A solid online system is the foundation of continuous ranking optimization, and continuous business-driven mining of data and models is the engine of continuous improvement. We are still exploring.

References
Learning to Rank. Wikipedia.
Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 1189-1232.
He, X., Pan, J., Jin, O., Xu, T., Liu, B., Xu, T., ... & Candela, J. Q. (2014). Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1-9). ACM.
McMahan, H. B., Holt, G., Sculley, D., Young, M., Ebner, D., Grady, J., ... & Kubica, J. (2013). Ad click prediction: A view from the trenches. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1222-1230). ACM.
Craswell, N., Zoeter, O., Taylor, M., & Ramsey, B. (2008). An experimental comparison of click position-bias models. In Proceedings of the 2008 International Conference on Web Search and Data Mining (pp. 87-94). ACM.
Cold start. Wikipedia.
Chapelle, O., Joachims, T., Radlinski, F., & Yue, Y. (2012). Large-scale validation and analysis of interleaved search evaluation. ACM Transactions on Information Systems (TOIS), 30(1), 6.
Akka: http://akka.io
Radlinski, F., & Craswell, N. (2010). Comparing the sensitivity of information retrieval metrics. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 667-674). ACM.

From: http://tech.meituan.com/meituan-search-rank.html