Network in Network notes

  • In a traditional CNN, a convolutional layer is a generalized linear model (GLM) followed by a nonlinear activation (historically sigmoid, now usually ReLU). If the layer has k filters, those k filters stand for k latent concepts to be separated, and different variants of the same concept are expected to activate the same filter, i.e., the filter should be invariant to them.
  • Modeling each filter with a GLM, however, assumes that the latent concepts are linearly separable, and that assumption is clearly not always true, nor even common.
  • To cope with this, traditional CNNs simply use more filters (a larger k), hoping that every variant still gets assigned to the correct concept.
  • For example, "person" and "cat" are different concepts but are not linearly separable; to avoid misclassification we are forced to build more sub-concepts (more filters), e.g., yellow cats, black cats, and so on.
  • Too many filters, however, causes two main problems: 1) the number of variants to cover is often combinatorially large, so the parameter count grows sharply; 2) it burdens the next layer, whose job is to combine this layer's outputs into higher-level semantic information, because it must now also merge all the filters that encode different variants of the same concept.
  • Maxout networks take a pointwise maximum over several linear feature maps, which yields a piecewise-linear function able to approximate any convex function; but again, not all problems are convex (a minimal maxout sketch follows this list).
  • This suggests we need a more general nonlinear "convolution kernel": the micro-network of Network in Network, which the paper instantiates as a multilayer perceptron (MLP).
  • A multilayer-perceptron convolution (mlpconv) layer is in effect an ordinary convolution followed by a few 1×1 convolutions, which change only the number of filters, not the size of the feature map (see the mlpconv sketch after this list).
  • In this sense, it is equivalent to building a deeper network.
  • The rationale for global average pooling is that, after several layers of such a network, each feature map of the last layer represents high-level information (a class-level concept, not low-level structure such as curves or textures). For this kind of map, pooling over the whole feature map amounts to measuring how strongly that filter's concept is detected in the image, and the pooled values can be used directly for classification or detection; there is no need for fully connected layers and a separately trained classifier (see the model sketch after this list).
  • To sum up, the Network in Network model is a CNN with extra layers: the added 1×1 convolutions make each convolutional unit more expressive, and replacing the fully connected layers with global average pooling greatly reduces the number of parameters and mitigates overfitting. The result is a deeper yet possibly smaller model, since fully connected layers account for most of the parameters.
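
As a concrete illustration (not part of the original notes), here is a minimal PyTorch sketch of a maxout convolution: each output channel is the pointwise maximum of k linear feature maps, giving a piecewise-linear approximator of convex functions. The class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class MaxoutConv2d(nn.Module):
    """Maxout convolution: each output channel is the max of k linear pieces (sketch)."""

    def __init__(self, in_channels, out_channels, kernel_size, k=2):
        super().__init__()
        self.k = k
        self.out_channels = out_channels
        # One convolution produces k linear "pieces" per output channel.
        self.conv = nn.Conv2d(in_channels, out_channels * k,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        y = self.conv(x)                                # (N, out*k, H, W)
        n, _, h, w = y.shape
        y = y.view(n, self.out_channels, self.k, h, w)
        return y.max(dim=2).values                      # pointwise max over the k pieces
```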
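
Likewise, a hedged sketch of one mlpconv block, assuming a PyTorch implementation: a normal spatial convolution followed by two 1×1 convolutions with ReLUs, i.e., a small MLP shared across every spatial position. The channel widths are placeholders, not the paper's exact configuration.

```python
import torch.nn as nn

def mlpconv(in_channels, mid_channels, out_channels,
            kernel_size, stride=1, padding=0):
    """One mlpconv block: spatial convolution + two 1x1 convolutions (sketch)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, mid_channels, kernel_size, stride, padding),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_channels, mid_channels, kernel_size=1),  # 1x1: mixes channels only
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_channels, out_channels, kernel_size=1),  # 1x1: sets output width
        nn.ReLU(inplace=True),
    )
```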
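
Putting the pieces together, a minimal NiN-style model built from the `mlpconv` helper above. The channel widths, kernel sizes, and 32×32 input are illustrative assumptions (loosely CIFAR-10-like), not the paper's exact network; the point is the head, where the last mlpconv emits one feature map per class and global average pooling replaces the fully connected classifier with zero extra parameters.

```python
import torch
import torch.nn as nn

num_classes = 10  # assumed 10-way classification, e.g. CIFAR-10

net = nn.Sequential(
    mlpconv(3, 192, 160, kernel_size=5, padding=2),
    nn.MaxPool2d(3, stride=2, padding=1),
    mlpconv(160, 192, 192, kernel_size=5, padding=2),
    nn.MaxPool2d(3, stride=2, padding=1),
    mlpconv(192, 192, num_classes, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),  # global average pooling: (N, C, H, W) -> (N, C, 1, 1)
    nn.Flatten(),             # (N, num_classes) class scores, no fully connected layer
)

logits = net(torch.randn(2, 3, 32, 32))  # -> shape (2, 10)
```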
