I. The main idea of this paper
Traditional CNN architectures require a fixed-size input image (for example, 256×256), and artificially resizing the input destroys its scale and aspect ratio. The authors observe that convolutional layers can accept input of arbitrary size; it is only the fully connected layers that require a fixed-size input. To solve this problem, they propose the spatial pyramid pooling (SPP-net) structure, which computes features 30-170 times faster than R-CNN in object detection.
II. Advantages of spatial pyramid pooling (SPP-net)
1. For inputs of different sizes, SPP produces an output of the same dimension, which sliding-window pooling cannot do;
2. SPP uses multi-level spatial bins, whereas sliding-window pooling uses a single window size; the multi-level pooling is very robust to object deformation;
3. Because the input size can vary, SPP can extract features at different scales.
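Advantage 1 can be seen with simple arithmetic: the pooled vector length depends only on the number of feature maps and the pyramid configuration, never on the image size. A minimal sketch, assuming 256 feature maps and an illustrative 3-level pyramid of 4×4, 2×2, and 1×1 bins (the paper's own experiments also use other pyramid configurations):

```python
# Fixed-length output of SPP, independent of input image size.
channels = 256                       # feature maps from the last conv layer
levels = [4, 2, 1]                   # illustrative pyramid: 4x4, 2x2, 1x1 bins
bins = sum(n * n for n in levels)    # 16 + 4 + 1 = 21 bins per feature map
print(channels * bins)               # 5376-dimensional vector, always
```

Whatever the image resolution, the fully connected layer always receives this same 5376-dimensional vector.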
III. Deep Networks with Spatial Pyramid Pooling
The feature-extraction process is actually very simple: the SPP layer is placed after the last convolutional layer, replacing that layer's pooling. Suppose the last convolutional layer has 256 feature maps, each of size a×a, and a pyramid level uses n×n bins. Then max-pooling is applied with window win = ceil(a/n) and stride str = floor(a/n). Finally, the pooled features from all pyramid levels are concatenated together as the input to the fully connected layer. This guarantees that the fully connected layer receives an input of the same size regardless of the size of the input image.
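The window/stride rule above can be sketched in NumPy. This is an illustrative implementation, not the authors' code; the pyramid levels and the 13×13 / 10×15 feature-map sizes are assumptions for the demonstration:

```python
import numpy as np

def spp_layer(fmap, levels=(4, 2, 1)):
    """Spatial pyramid pooling over one stack of feature maps.

    fmap: array of shape (C, H, W) from the last conv layer.
    levels: bin counts n, one n x n grid per pyramid level.
    Returns a fixed-length vector of size C * sum(n * n).
    """
    C, H, W = fmap.shape
    pooled = []
    for n in levels:
        # The paper's rule: window = ceil(a/n), stride = floor(a/n)
        win_h, str_h = int(np.ceil(H / n)), int(np.floor(H / n))
        win_w, str_w = int(np.ceil(W / n)), int(np.floor(W / n))
        for i in range(n):
            for j in range(n):
                patch = fmap[:, i * str_h : i * str_h + win_h,
                                j * str_w : j * str_w + win_w]
                pooled.append(patch.max(axis=(1, 2)))  # max-pool per channel
    return np.concatenate(pooled)

# Two different input sizes yield the same output dimension:
a = spp_layer(np.random.rand(256, 13, 13))
b = spp_layer(np.random.rand(256, 10, 15))
assert a.shape == b.shape == (256 * (16 + 4 + 1),)
```

Note that the conv feature maps need not be square; the window and stride are simply computed per axis, which is what makes arbitrary input sizes possible.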
IV. Experimental results
Judging from these results, SPP-net does bring clear improvements.
V. Summary
The idea of this paper is mainly based on spatial pyramid matching (SPM): it combines CNN with SPM, which is an idea worth borrowing. And in training the model, it is the first work to alternate between inputs of different sizes (multi-size training).