Lightweight Network: SqueezeNet


SqueezeNet was published at ICLR 2017, with authors from Berkeley and Stanford. SqueezeNet is not a model-compression technique; rather, it is a set of "design strategies for CNN architectures with few parameters."

Innovation point:
1. A convolution scheme that differs from the traditional one (similar in spirit to Inception): the Fire module. A fire module consists of two parts, a squeeze layer and an expand layer.

This innovation is so close to the Inception idea that it hardly counts as a breakthrough. First comes the squeeze layer, a 1*1 convolution whose number of kernels is smaller than the number of feature maps in the previous layer. This operation dates back to the Inception series, where it is usually described as dimensionality reduction, though I think "compression" is the more fitting name for it.
The expand layer convolves with 1*1 and 3*3 kernels in parallel and then concatenates the results; this operation also appears in the Inception series...

----------------------------------------

The core of SqueezeNet is the Fire module. A fire module consists of two layers, a squeeze layer and an expand layer, as shown in the figure below. The squeeze layer is a convolution layer with 1*1 kernels; the expand layer is a convolution layer with both 1*1 and 3*3 kernels. The feature maps produced by the expand layer's 1*1 and 3*3 convolutions are concatenated; the detailed operation is shown in Figure 2.

A fire module's input feature map is h*w*m and its output feature map is h*w*(e1+e3). Note that the spatial resolution of the feature map is unchanged; only the dimension, i.e. the number of channels, changes. This is consistent with the idea behind VGG.

First, the h*w*m feature map passes through the squeeze layer, producing s1 feature maps, where s1 is smaller than m so as to achieve "compression"; for the detailed reasoning, refer to Google's Inception series.

Second, the h*w*s1 feature map is fed into the expand layer, convolved by the 1*1 convolution layer and the 3*3 convolution layer separately, and the results are concatenated, giving the fire module's output: a feature map of size h*w*(e1+e3).
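To make the shape bookkeeping above concrete, here is a minimal NumPy sketch of a fire module. The naive convolution, the absence of biases, and the weight-array layout are my own illustrative choices; only the s1/e1/e3 structure comes from the paper:

```python
import numpy as np

def conv2d(x, w, pad=0):
    """Naive 2-D convolution (cross-correlation). x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    x = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    _, H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    out = np.zeros((c_out, out_h, out_w))
    for o in range(c_out):
        for i in range(out_h):
            for j in range(out_w):
                out[o, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[o])
    return out

def fire(x, w_squeeze, w_expand1, w_expand3):
    """Fire module: squeeze (1*1), then expand (1*1 and 3*3), concatenated on channels."""
    relu = lambda t: np.maximum(t, 0)
    s = relu(conv2d(x, w_squeeze))           # m -> s1 channels, H and W unchanged
    a = relu(conv2d(s, w_expand1))           # s1 -> e1 channels (1*1)
    b = relu(conv2d(s, w_expand3, pad=1))    # s1 -> e3 channels (3*3, padding keeps H, W)
    return np.concatenate([a, b], axis=0)    # output: (e1 + e3, H, W)
```

Feeding an 8-channel 6*6 input through a fire module with s1=2, e1=e3=4 yields an (8, 6, 6) output: the resolution is preserved and only the channel count changes, exactly as described above.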

The fire module has three adjustable parameters: s1, e1, and e3. Each denotes the number of convolution kernels in the corresponding layer, which is also the channel dimension of that layer's output feature map. In the SqueezeNet architecture proposed in the paper, e1 = e3 = 4*s1.
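These three parameters also determine the module's weight count. Ignoring biases, a fire module with m input channels has m*s1 weights in the squeeze layer, s1*e1 in the 1*1 expand branch, and 9*s1*e3 in the 3*3 expand branch. A quick arithmetic check (the m=96 input width is an illustrative value consistent with the paper's first fire module):

```python
def fire_params(m, s1, e1, e3):
    """Weight count of one fire module, biases ignored."""
    squeeze = m * s1               # 1*1 kernels: m -> s1
    expand1 = s1 * e1              # 1*1 kernels: s1 -> e1
    expand3 = s1 * e3 * 3 * 3      # 3*3 kernels: s1 -> e3
    return squeeze + expand1 + expand3

# With s1 = 16 and e1 = e3 = 4 * 16 = 64 on a 96-channel input:
# 96*16 + 16*64 + 9*16*64 = 11776 weights
print(fire_params(96, 16, 64, 64))
```

The 3*3 branch dominates the count, which is exactly why the squeeze layer shrinks its input channels first.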

Next, look at the SqueezeNet network structure:

First comes conv1, then fire2 through fire9, and finally conv10, after which a global average pooling layer replaces the FC layer to produce the output.
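The layer sequence above can be written out as a simple configuration list. This is my reconstruction of the paper's Table 1 (SqueezeNet v1.0), so treat the exact kernel sizes and strides as indicative; the fire-module widths follow the e1 = e3 = 4*s1 rule from the text:

```python
# SqueezeNet v1.0 layer sequence; fire modules are given as (s1, e1, e3).
LAYERS = [
    ("conv1",     dict(filters=96, kernel=7, stride=2)),
    ("maxpool1",  dict(kernel=3, stride=2)),
    ("fire2",     dict(s1=16, e1=64,  e3=64)),
    ("fire3",     dict(s1=16, e1=64,  e3=64)),
    ("fire4",     dict(s1=32, e1=128, e3=128)),
    ("maxpool4",  dict(kernel=3, stride=2)),
    ("fire5",     dict(s1=32, e1=128, e3=128)),
    ("fire6",     dict(s1=48, e1=192, e3=192)),
    ("fire7",     dict(s1=48, e1=192, e3=192)),
    ("fire8",     dict(s1=64, e1=256, e3=256)),
    ("maxpool8",  dict(kernel=3, stride=2)),
    ("fire9",     dict(s1=64, e1=256, e3=256)),
    ("conv10",    dict(filters=1000, kernel=1, stride=1)),
    ("avgpool10", dict(kind="global")),  # replaces the FC layer
]

# Sanity check: every fire module satisfies e1 = e3 = 4 * s1.
for name, cfg in LAYERS:
    if name.startswith("fire"):
        assert cfg["e1"] == cfg["e3"] == 4 * cfg["s1"]
```

Notice there is no fully connected layer anywhere: conv10 maps directly to the 1000 class channels, and global average pooling collapses the spatial dimensions.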

The detailed parameters are shown in the following figure:

Now look at the comparison between SqueezeNet and AlexNet:

Compressing SqueezeNet with the Deep Compression technique finally yields a 0.5 MB model, and the model's performance is still quite good.
Deep Compression was the ICLR 2016 best paper. See:
(https://arxiv.org/pdf/1510.00149.pdf)

Look at the figure, then look back at the title of the paper:
SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size

That SqueezeNet has 50x fewer parameters than AlexNet is fine, as the third-from-bottom row of the table shows. But the "<0.5MB" part has nothing to do with SqueezeNet itself; it was obtained by other techniques. The title makes it easy to assume that SqueezeNet itself can compress a model.
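The distinction is easy to see from the headline model sizes. The figures below are my recollection of the paper's abstract (AlexNet about 240 MB of weights, SqueezeNet about 4.8 MB, SqueezeNet plus Deep Compression about 0.47 MB), so take them as approximate:

```python
alexnet_mb    = 240    # AlexNet weight storage (approx., from the paper)
squeezenet_mb = 4.8    # SqueezeNet alone: pure architecture, no compression
compressed_mb = 0.47   # SqueezeNet + Deep Compression

print(int(alexnet_mb / squeezenet_mb))  # ~50x:  what the architecture buys you
print(int(alexnet_mb / compressed_mb))  # ~510x: only with Deep Compression on top
```

So the 50x reduction is SqueezeNet's own contribution, while getting under 0.5 MB requires the separate compression step.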
