This paper's citation count is not especially high, but it takes a very interesting point of view: unifying the fully connected layer with the convolution layer. Many later classic network structures, including GoogLeNet and FCN, appear to have been inspired by it. The work comes from Shuicheng Yan's team, and the NIN model also appears in the Caffe Model Zoo, so it has been quite influential.

Technical Summary

NIN improves the structure of the traditional CNN in two ways. First, each convolution layer is replaced by a small multilayer fully connected neural network (a multilayer perceptron, MLP), which can approximate any function rather than performing only a simple convolution operation. Second, the traditional CNN's fully connected classifier is removed: the penultimate layer consists of feature maps, one per class, which feed directly into a softmax layer that outputs the class probabilities.

Some details worth reflecting on

The specific structure of NIN is three MLP layers, each containing a three-layer fully connected network, with the network ending in global average pooling and a softmax layer; there is no fully connected classifier. According to the authors, an MLP layer is really a stack of convolution layers: the first is an ordinary convolution and the rest are 1*1 convolutions. Here the terminology, including in many experts' write-ups, is not strict, so the notion of a "1*1 convolution" easily confuses people. The input to a convolution is a three-dimensional cube: two dimensions form the image plane and the third is the channel dimension, so a "1*1 convolution" is more precisely a 1*1*n convolution, where n is the number of channels. It convolves only along the channel direction; if there are m such kernels, it is exactly a fully connected layer from n neurons to m neurons, applied at every spatial position. Removing the fully connected layers from the final classification removes a black box, so that every layer has a concrete interpretation.
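The equivalence claimed above (a 1*1*n convolution with m kernels is a fully connected layer from n to m neurons, repeated at every pixel) can be checked numerically. This is a minimal numpy sketch with made-up sizes, not code from the paper:

```python
import numpy as np

# Hypothetical sizes: n input channels, m output channels, H x W spatial plane.
n, m, H, W = 4, 3, 5, 5
rng = np.random.default_rng(0)
x = rng.standard_normal((n, H, W))   # input feature maps (channels first)
w = rng.standard_normal((m, n))      # m kernels, each of shape 1*1*n

# 1x1 convolution: at every spatial position, mix the n input channels
# into m output channels using the same weight matrix.
conv_out = np.einsum('mn,nhw->mhw', w, x)

# Equivalent view: a fully connected layer from n to m neurons,
# applied independently at each of the H*W pixel positions.
fc_out = (w @ x.reshape(n, -1)).reshape(m, H, W)

assert np.allclose(conv_out, fc_out)
```

Both views produce the same (m, H, W) output, which is why frameworks can implement the MLP layers of NIN with ordinary convolution modules.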
Looking at the final experiment, we can see that the feature maps in the penultimate layer are actually per-class response heat maps. Because the operations are local convolutions from beginning to end, such a heat map localizes the target fairly accurately and gives target detection as a side effect. Compare FCN: there every local block is ultimately mapped to a class label, so a heat map forms naturally at the end. NIN also uses local mappings, but it finishes with global pooling before mapping to class labels, so its heat maps arise less directly than in FCN. Dropout is applied to all layers except the last MLP layer to prevent overfitting. Global contrast normalization and ZCA whitening were applied to the input images as preprocessing, though I am not sure how much they contribute.

Points worth borrowing

The structure becomes more complex, yet remains compatible with the basic CNN building blocks, so it is easy to implement in a modular framework such as Caffe. Localized operation, full convolution, and a deeper structure. Removing the fully connected black box. Fewer parameters.
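The global-pooling-to-softmax step described above can be sketched in a few lines: each class's feature map is averaged to a single score, the scores go through softmax, and the winning class's map itself serves as a coarse localization heat map. The sizes and random values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
num_classes, H, W = 10, 6, 6
# Penultimate layer of NIN: one feature map per class.
feature_maps = rng.standard_normal((num_classes, H, W))

# Global average pooling: each class map collapses to one confidence score.
scores = feature_maps.mean(axis=(1, 2))

# Softmax over the pooled scores gives the class probabilities
# (subtracting the max for numerical stability).
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# The predicted class's feature map doubles as a heat map that
# roughly localizes the target, since every unit is a local response.
heat_map = feature_maps[probs.argmax()]
```

Because there are no fully connected weights between the feature maps and the prediction, the spatial structure of the response survives all the way to the output, which is what makes the detection side effect possible.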