NIPS 2016 paper: Intel China Research Institute's latest results on neural network compression



Source: http://www.leiphone.com/news/201609/OzDFhW8CX4YWt369.html

Intel China Research Institute's latest achievement in deep learning: the "dynamic network surgery" algorithm (2016-09-05)

Lei Feng Net note: This article presents the latest research results of Intel China Research Institute. It introduces a "dynamic network surgery" algorithm, which effectively solves the problems of long training time and a high rate of erroneous pruning when handling large-scale networks. Using this algorithm, the total parameters of the two classic networks LeNet and AlexNet can easily be compressed by factors of 108 and 17.7, respectively.

Intel China Research Institute recently proposed a neural network compression algorithm called "dynamic network surgery", which has attracted extensive attention in the industry and high praise from experts at home and abroad. Using this method, researchers can greatly compress a network's structure without reducing the performance of the original model. This article walks through the algorithm in an accessible way.

If you regularly follow IT and Internet news, the term "deep learning" will not be unfamiliar. The concept grew out of early research on artificial neural networks; its core is to learn abstract representations of sample data through deep neural networks. Since it was first proposed in 2006, deep learning has dramatically changed the ecosystem of AI and the entire IT and Internet landscape. After a decade of development, the technology has delivered top performance in many fields, including face recognition, speech recognition, object detection, and natural language processing.

One of the mainstream deep networks: deep convolutional neural networks

(Image source: A tutorial on deep learning [3])

However, deep learning is still some distance from being "invincible". One of the main bottlenecks restricting its development is the contradiction between the extremely high complexity of deep network models and the very limited hardware resources of consumer-grade electronic products. In fact, many of today's mainstream deep networks contain tens of millions or even billions of learnable parameters, and parameters at this scale put considerable pressure on both the storage and the computation of the model. How to compress a well-trained deep network has therefore become a hard problem for researchers. A 2015 paper, "Learning both Weights and Connections for Efficient Neural Networks" [1], proposed a network pruning algorithm that can compress the learnable parameters of a deep network by a factor of more than 10 while preserving its representational ability, which sparked extensive discussion in academia. The paper was published at the Conference on Neural Information Processing Systems (NIPS), a top international conference in machine learning, and has been highly influential.

Neural network pruning strategy

(Image source: Learning both weights and connections for efficient neural networks [1])
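To make the pruning idea concrete, here is a minimal sketch of magnitude-based pruning in the spirit of [1], written in Python with NumPy. The layer shape and the threshold value are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Sever connections whose absolute weight falls below the threshold."""
    mask = np.abs(weights) >= threshold
    return weights * mask

# A hypothetical fully connected layer; the size is made up for illustration.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 512))

W_pruned = prune_by_magnitude(W, threshold=0.1)
sparsity = 1.0 - np.count_nonzero(W_pruned) / W_pruned.size
print(f"fraction of connections removed: {sparsity:.2%}")
```

In [1], pruning alternates with retraining: the surviving weights are fine-tuned to recover accuracy, and the prune/retrain cycle repeats until the target compression is reached. A connection severed this way, however, stays severed, which is what makes erroneous pruning costly.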

This year, three researchers at Intel China Research Institute's Cognitive Computing Laboratory, Guo Yiwen, Yao Anbang, and Chen Yurong, made a breakthrough in this area. The dynamic network surgery algorithm they propose effectively solves the problems of long training time and a high rate of erroneous pruning when handling large-scale networks. Using this algorithm, the total parameters of the two classic networks LeNet and AlexNet can easily be compressed by factors of 108 and 17.7, respectively.

Guo Yiwen, Chen Yurong, and Yao Anbang, Cognitive Computing Laboratory, Intel China Research Institute

The algorithm completes the network compression task through a strategy of pruning plus splicing (grafting), carrying out training and compression synchronously. By introducing the network splicing operation, it avoids the performance loss caused by erroneous pruning, and in practice it can better approach the theoretical limit of network compression.

The strategy of dynamic network surgery

(The dashed lines represent network connections currently severed by pruning, while the green lines represent connections re-established by splicing [2])
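In code, the pruning-plus-splicing update can be sketched as follows. This is a minimal illustration of the scheme described in [2], not the authors' implementation; the thresholds a and b and the learning rate are hypothetical values (the paper sets them per layer), with a < b forming a hysteresis band.

```python
import numpy as np

def surgery_step(W, T, grad, lr=0.01, a=0.05, b=0.07):
    """One training step of pruning plus splicing.

    W    -- full weight matrix; even severed entries keep training
    T    -- binary mask; the network actually uses W * T
    grad -- gradient of the loss w.r.t. the effective weights W * T
    """
    # Update every entry of W, masked or not, so that a mistakenly
    # pruned connection can regain magnitude over time.
    W = W - lr * grad
    mag = np.abs(W)
    T = np.where(mag < a, 0.0, T)   # pruning: sever weak connections
    T = np.where(mag >= b, 1.0, T)  # splicing: re-establish strong ones
    # Entries with a <= |w| < b keep their previous mask state.
    return W, T
```

Because severed weights continue to receive gradient updates, a connection pruned by mistake can grow back past the upper threshold and be spliced back into the network; this is how the method keeps training and compression synchronized and avoids the performance loss of irreversible pruning.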

The paper describing the method has been accepted and will be published at this year's NIPS conference. As a top conference in machine learning, NIPS has maintained a very low paper acceptance rate in recent years, and its published papers receive great attention from the industry, giving it considerable influence on the development of machine learning. Reportedly, NIPS received about 2,500 submissions this year, with an acceptance rate below 23%. The work from Intel China Research Institute can be expected to have a significant impact on both academia and industry.

Image sources:

[1] Han, Song, et al. "Learning both Weights and Connections for Efficient Neural Networks." Advances in Neural Information Processing Systems, 2015.

[2] Guo, Yiwen, Yao, Anbang, and Chen, Yurong. "Dynamic Network Surgery for Efficient DNNs." arXiv preprint arXiv:1608.04493, 2016.

[3] Yu, Kai. "A Tutorial on Deep Learning." China Workshop on Machine Learning and Applications, 2012.

Lei Feng Network note: This article was released by Lei Feng Network with authorization from the original publisher (search for the "Lei Feng Network" public account to follow). For reprints, please contact the original author, credit the author and the source, and do not delete content.

