Discovering and exploring data with advanced analytic algorithms such as large-scale machine learning, graph analysis, and statistical modelling is a popular idea. In an IDF16 technology class, Intel Software Development Engineer Wang Yiheng presented a course on machine learning and neural network algorithms and applications based on Apache Spark. This article introduces practical applications of large-scale distributed machine learning in fraud detection and user behavior prediction (sparse logistic regression), along with Intel's supporting and optimization work on LDA, Word2vec, CNN, sparse K-means, and the parameter server.
There are many machine learning/deep learning libraries today. Supporting distributed machine learning and deep neural networks on Spark is motivated by two considerations. First, the unity of the big data platform: thanks to Spark's features, analytics teams are increasingly interested in using Spark as their big data platform, and machine learning/deep learning is inseparable from big data. Second, other frameworks (mainly deep learning frameworks such as Caffe) have poor support for multi-machine parallelism.
In an end-to-end big data solution for a top-tier payment company, Intel developed a Standardizer, a WOE (Weight of Evidence) transformer, neural network models, estimators, a Bagging utility, and so on; Intel has also improved Spark's ML Pipelines.
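The talk does not spell out the WOE transformer's internals, but Weight of Evidence encoding is standard in fraud and credit scoring: each category (or bin) is replaced by the log-odds of events versus non-events in that bin. A minimal plain-Python sketch of the computation (the actual Intel component is a Spark ML pipeline stage; the function name and the smoothing constant here are illustrative assumptions):

```python
import math

def woe_table(categories, labels, eps=0.5):
    """Compute Weight of Evidence per category.

    WOE(c) = ln( (events_c / total_events) / (non_events_c / total_non_events) )
    where labels are 1 (event, e.g. fraud) or 0 (non-event).
    `eps` smooths bins that contain no events or no non-events.
    """
    events, non_events = {}, {}
    for c, y in zip(categories, labels):
        if y == 1:
            events[c] = events.get(c, 0) + 1
        else:
            non_events[c] = non_events.get(c, 0) + 1
    cats = set(events) | set(non_events)
    total_e = sum(events.values())
    total_n = sum(non_events.values())
    woe = {}
    for c in cats:
        e = (events.get(c, 0) + eps) / (total_e + eps * len(cats))
        n = (non_events.get(c, 0) + eps) / (total_n + eps * len(cats))
        woe[c] = math.log(e / n)
    return woe
```

A category with a positive WOE is over-represented among events; in a Spark pipeline the same table would be computed once on the training data and then applied as a column transformation.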
Sparse logistic regression mainly addresses network and memory bottlenecks: in large-scale learning, the weight vector broadcast to each worker on every iteration and the gradient sent back by each task are double-precision vectors, which are very large when dense. Intel exploits data sparsity: training data is cached in a sparse format, transferred data is compressed, and binary feature values get optimized handling, so the resulting gradient is a sparse vector.
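The key observation behind the sparse gradient is that for logistic regression, a single example's gradient is a scalar times the example's feature vector, so it is nonzero only where the example is. A minimal sketch with dict-based sparse vectors (the real implementation operates on Spark's sparse vector types; these helper names are illustrative):

```python
import math

def sparse_dot(w, x):
    # w: dict index -> weight, x: dict index -> value (sparse feature vector)
    return sum(w.get(i, 0.0) * v for i, v in x.items())

def sparse_lr_gradient(w, x, y):
    """Per-example logistic-regression gradient.

    grad = (sigmoid(w.x) - y) * x is nonzero only at x's indices, so only
    those entries need to be sent back to the driver. For binary features
    (all values 1.0) the entries are all the same scalar, so transmitting
    the indices alone suffices.
    """
    margin = sparse_dot(w, x)
    pred = 1.0 / (1.0 + math.exp(-margin))
    scale = pred - y
    return {i: scale * v for i, v in x.items()}
```

The binary-feature remark in the comment corresponds to the "optimizes binary values" point above: when features are 0/1 indicators, the gradient payload degenerates to one scalar plus an index list.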
A large-scale topic model based on Apache Spark is under development (https://github.com/intel-analytics/TopicModeling).
For distributed neural networks on Spark, the driver broadcasts weights and biases to each worker, similar to sparse logistic regression. Intel integrates the neural network implementation with the optimized Intel Math Kernel Library (MKL), which provides acceleration on Intel architectures.
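The broadcast-then-aggregate loop described above is ordinary data-parallel gradient descent. The following sketch simulates it in plain Python with a linear model and shards standing in for Spark partitions (in real Spark this would be a broadcast variable plus `treeAggregate`; everything here is a simplified stand-in):

```python
def shard_gradient(weights, shard):
    # Each "worker" computes the squared-loss gradient on its partition
    # for a linear model y_hat = w . x.
    n = len(weights)
    grad = [0.0] * n
    for x, y in shard:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for j in range(n):
            grad[j] += err * x[j]
    return grad, len(shard)

def distributed_step(weights, shards, lr=0.1):
    # Driver "broadcasts" the weights, workers return (gradient, count),
    # the driver sums the partial gradients and applies one update.
    total = [0.0] * len(weights)
    count = 0
    for shard in shards:
        g, c = shard_gradient(weights, shard)
        total = [t + gi for t, gi in zip(total, g)]
        count += c
    return [w - lr * t / count for w, t in zip(weights, total)]
```

Because the summed partial gradients equal the full-batch gradient, the distributed step is numerically identical to single-machine batch gradient descent; the MKL integration mentioned above accelerates the per-worker linear algebra inside `shard_gradient`.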
Work on a parameter server for Spark covers the data model, supported operations, the synchronization model, fault tolerance, GraphX integration, and so on. Mutable parameters complement the Spark system, yielding better performance and fault tolerance; the two architectures are simply integrated at the system level (on YARN). Because model parallelism is complex, the Intel team has not yet tackled it.
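The core data model and operations of a parameter server reduce to pull (read current parameters) and push (apply an update), with a clock that synchronization models such as BSP or SSP build on. A toy single-process sketch of that interface (the talk gives no API details, so every name here is an illustrative assumption; a real server is sharded and concurrent):

```python
class ParameterServer:
    """Toy parameter server: workers pull the current parameters and
    push sparse updates; the server applies them and advances a clock,
    which staleness-bounded (SSP) schemes use to throttle fast workers."""

    def __init__(self, dim):
        self.params = [0.0] * dim
        self.version = 0  # logical clock

    def pull(self):
        # Return a snapshot of the parameters plus the current clock.
        return list(self.params), self.version

    def push(self, sparse_update, lr=0.1):
        # Apply a sparse gradient update {index: gradient_value}.
        for i, g in sparse_update.items():
            self.params[i] -= lr * g
        self.version += 1
```

Mutable server-side state is exactly what complements Spark's immutable RDD model here: workers iterate against the server instead of re-broadcasting full weight vectors each round.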