Machine learning and neural network algorithms and applications based on Apache Spark


Discovering and exploring data with advanced analytic algorithms such as large-scale machine learning, graph analysis, and statistical modelling is a popular idea. In an IDF16 technology class, Intel software development engineer Wang Yiheng presented a course on machine learning and neural network algorithms and applications based on Apache Spark. This article introduces practical applications of large-scale distributed machine learning in fraud detection and user behavior prediction (sparse logistic regression), along with some of Intel's support and optimization work on LDA, Word2Vec, CNN, sparse K-means, and a parameter server.

Many machine learning/deep learning libraries already exist; the choice to support distributed machine learning and deep neural networks on Spark rests on two considerations. First, a unified big data platform: given Spark's feature set, analytics teams are increasingly adopting Spark as their big data platform, and machine learning/deep learning is inseparable from big data. Second, other frameworks (mainly deep learning frameworks, such as Caffe) lack good support for multi-machine parallelism.

In an end-to-end big data solution for a top-tier payment company, Intel developed a Standardizer, a WOE (weight-of-evidence) transformer, neural network models, an estimator, a bagging utility, and more; Intel also contributed improvements to ML Pipelines.
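WOE is a standard categorical encoding in credit-risk and fraud models. The article does not show Intel's implementation, so the following is only a minimal single-machine sketch of the statistic itself (the function name and the add-0.5 smoothing are illustrative choices, not from the source):

```python
from collections import Counter
from math import log

def woe(categories, labels):
    """Compute weight-of-evidence per category.

    categories: list of category values; labels: 1 = event ("bad"), 0 = non-event ("good").
    WOE(c) = ln( P(category=c | good) / P(category=c | bad) ).
    """
    good = Counter(c for c, y in zip(categories, labels) if y == 0)
    bad = Counter(c for c, y in zip(categories, labels) if y == 1)
    n_good, n_bad = sum(good.values()), sum(bad.values())
    out = {}
    for c in set(categories):
        # add-0.5 smoothing avoids division by zero for pure categories
        pg = (good[c] + 0.5) / (n_good + 0.5)
        pb = (bad[c] + 0.5) / (n_bad + 0.5)
        out[c] = log(pg / pb)
    return out
```

Categories with a WOE far from zero carry strong evidence in one direction, which is what makes the transform useful as an input feature for models like logistic regression.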

Sparse logistic regression mainly addresses network and memory bottlenecks: in large-scale learning, the weight vector broadcast to every worker at each iteration, and the gradient sent back by each task, are double-precision vectors that can be very large. Intel exploits data sparsity by caching the data in a sparse format, compressing the traffic, and handling binary feature values specially, so that the resulting gradient is a sparse vector.
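To see why the gradient stays sparse: the per-example logistic-loss gradient is (σ(w·x) − y)·x, so its support equals the support of the input features. A minimal Python sketch using dict-based sparse vectors (this is an illustration of the principle, not Intel's actual encoding):

```python
from math import exp

def sparse_dot(w, x):
    """Dot product of a dense weight list w with a sparse example x (index -> value)."""
    return sum(w[i] * v for i, v in x.items())

def logistic_gradient(w, x, y):
    """Per-example logistic-loss gradient, returned as a sparse vector.

    grad = (sigma(w.x) - y) * x, so only the non-zero coordinates of x
    ever need to be computed or sent over the network.
    """
    margin = sparse_dot(w, x)
    p = 1.0 / (1.0 + exp(-margin))  # sigma(w.x)
    scale = p - y
    return {i: scale * v for i, v in x.items()}
```

For purely binary features the values are all 1, so in principle only the indices need to be transmitted, which is the kind of encoding optimization the paragraph above alludes to.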

A large-scale topic model based on Apache Spark is under development (https://github.com/intel-analytics/TopicModeling).

For distributed neural networks on Spark, the driver broadcasts the weights and biases to each worker, similar to sparse logistic regression. Intel integrates the neural network with the optimized Intel Math Kernel Library (MKL), which provides acceleration on Intel architectures.
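The broadcast-then-aggregate pattern described above can be simulated in plain Python without a Spark dependency. In this sketch a linear model with squared loss stands in for the neural network, partitions are plain lists, and all names are illustrative:

```python
# Single-process sketch of the Spark pattern: the driver "broadcasts"
# (weights, bias), each "worker" computes a local gradient on its data
# partition (map), and the driver sums the results and updates (reduce).

def local_gradient(weights, bias, partition):
    """Gradient of squared loss summed over one partition of (x, y) pairs."""
    gw = [0.0] * len(weights)
    gb = 0.0
    for x, y in partition:
        err = sum(w * xi for w, xi in zip(weights, x)) + bias - y
        for i, xi in enumerate(x):
            gw[i] += err * xi
        gb += err
    return gw, gb

def train_step(weights, bias, partitions, lr=0.1):
    """One synchronous iteration: broadcast, map over partitions, reduce, update."""
    grads = [local_gradient(weights, bias, p) for p in partitions]  # "map"
    n = sum(len(p) for p in partitions)
    gw = [sum(g[0][i] for g in grads) / n for i in range(len(weights))]
    gb = sum(g[1] for g in grads) / n
    return [w - lr * g for w, g in zip(weights, gw)], bias - lr * gb
```

In real Spark code the reduce step would typically be a `treeAggregate` over the RDD partitions, and the MKL-backed kernels would replace the inner loops.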

Work on a parameter server for Spark covers the data model, supported operations, synchronization models, fault tolerance, GraphX integration, and so on. Mutable parameters complement the existing system to achieve better performance and fault tolerance; the two architectures are integrated into a single system (on YARN). Because model parallelism is complex, the Intel team has not yet taken on model-parallel work.
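The article does not publish the design details, so the following is only a hypothetical sketch of the push/pull interaction pattern that parameter servers generally expose; class and method names are illustrative:

```python
# Minimal parameter-server sketch: workers pull only the coordinates they
# need and push sparse gradient updates, so traffic scales with the sparse
# support rather than the full model dimension.

class ParameterServer:
    def __init__(self, dim):
        self.params = [0.0] * dim  # the shared, mutable model state

    def pull(self, indices):
        """Workers fetch only the parameter coordinates they need."""
        return {i: self.params[i] for i in indices}

    def push(self, sparse_grad, lr=0.1):
        """Workers send sparse gradient updates; the server applies them."""
        for i, g in sparse_grad.items():
            self.params[i] -= lr * g
```

A real implementation would shard `params` across server nodes and add the synchronization model (BSP, SSP, or fully asynchronous) and fault tolerance mentioned above.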

