Project homepage: https://github.com/hszhao/PSPNet
1 Summary
Ranked 1st on multiple benchmarks, including PASCAL VOC 2012 (information as of 2016.12.16):
http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?cls=mean&challengeid=11&compid=6&submid=8822#key_pspnet
PSPNet leverages global context information through different-region-based context aggregation (pyramid pooling).
1 Introduction
Datasets:
LMO dataset [22]
PASCAL VOC 2012 and PASCAL Context datasets [8, 29]
ADE20K dataset [43]
Mainstream scene parsing algorithms are based on the FCN (fully convolutional network); their problem is that they do not use global scene context.
This paper presents the Pyramid Scene Parsing Network (PSPNet), built on a dilated FCN [3, 40] for pixel-level prediction.
Main contributions:
Proposes the pyramid scene parsing network
Proposes an effective optimization strategy for deep ResNet [13] based on a deeply supervised loss
Builds a practical system for scene parsing
2 Related Work
Development: fully connected layers were first replaced with convolutions, then dilated convolutions were introduced [3, 40], and deconvolution was used to go from coarse to fine predictions [30].
The work in this paper builds on the FCN and dilated-convolution work [3, 26].
One line of related work aggregates multi-scale features (high-level features generally carry semantic information while low-level features carry positional information); another is structured prediction [3], which uses a CRF as post-processing.
[24] points out that combining global average pooling with FCN improves segmentation, but this paper finds it insufficient for complex scenes, which motivates the proposed different-region-based context aggregation.
3 Pyramid Scene Parsing Network
Basic framework:
Global average pooling [34, 13, 24] still has limitations.
Pyramid pooling module: 4 levels.
The coarsest level is global pooling, which produces a single bin; the other levels pool over sub-regions, so the pooled feature maps differ in size.
Next, a 1x1 convolution is applied to each pyramid level to reduce the context representation to 1/N of the input channel dimension, where N is the number of pyramid levels.
Then, each low-dimensional feature map is directly upsampled back to the original feature-map size.
Finally, the features from the different levels are concatenated with the original feature map and passed through a final convolution to produce the output (a code sketch follows Section 3.3 below).
3.3 Network Architecture
A pre-trained ResNet [13] with dilated convolutions is used to extract the feature map, whose size is 1/8 of the input image (as explained in DeepLab).
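The snippet below is an illustrative PyTorch example (the paper's implementation is Caffe) of why dilation keeps the backbone output at 1/8 resolution: with stride 1 and padding equal to the dilation rate, a 3x3 convolution keeps the spatial size while enlarging its receptive field. The channel count and feature size are made-up values.

```python
import torch
import torch.nn as nn

# Hypothetical backbone features already at 1/8 of the input resolution.
x = torch.randn(1, 1024, 60, 60)

# 3x3 convolution with dilation 2: stride 1 and padding = dilation keep the
# spatial size unchanged while the effective receptive field grows to 5x5.
conv = nn.Conv2d(1024, 1024, kernel_size=3, stride=1,
                 padding=2, dilation=2, bias=False)
print(conv(x).shape)  # torch.Size([1, 1024, 60, 60])
```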
The 4-level pyramid pooling module is then applied, and the final prediction is produced through convolution.
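Below is a minimal PyTorch sketch of the pyramid pooling module described above, assuming the paper's bin sizes (1, 2, 3, 6) and a 2048-channel ResNet feature map; the original code is in Caffe, and details such as BatchNorm placement here are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingModule(nn.Module):
    def __init__(self, in_channels=2048, bin_sizes=(1, 2, 3, 6)):
        super().__init__()
        # Each level reduces the context representation to 1/N of the input
        # channels, where N is the number of pyramid levels (4 here).
        reduced = in_channels // len(bin_sizes)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(output_size=b),   # bin size 1 = global pooling
                nn.Conv2d(in_channels, reduced, kernel_size=1, bias=False),
                nn.BatchNorm2d(reduced),
                nn.ReLU(inplace=True),
            )
            for b in bin_sizes
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample every pooled level back to the input feature-map size,
        # then concatenate with the original features along the channel axis.
        pyramid = [x] + [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        return torch.cat(pyramid, dim=1)  # 2048 + 4 * 512 = 4096 channels

# Example: features from a dilated ResNet at 1/8 of the input resolution.
feat = torch.randn(1, 2048, 60, 60)
print(PyramidPoolingModule()(feat).shape)  # torch.Size([1, 4096, 60, 60])
```

A final convolution (not shown) maps the concatenated 4096-channel tensor to per-class logits, which are upsampled to the input resolution.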
Residual networks use skip connections to ease the optimization of deep networks: each later layer mainly learns a residual on top of the earlier layers.
In this work, the authors propose adding an auxiliary loss at an intermediate stage; together with the final loss, this splits the network into two relatively simple optimization problems.
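A minimal sketch of this deeply supervised training objective, assuming cross-entropy on both the main and the auxiliary branch; the 0.4 weight comes from the implementation details in Section 5.1, while the function and argument names (and the ignore index) are illustrative assumptions.

```python
import torch.nn.functional as F

def pspnet_training_loss(main_logits, aux_logits, target, aux_weight=0.4):
    # Final (master branch) loss plus a down-weighted auxiliary loss from an
    # intermediate stage; the auxiliary branch is dropped at test time.
    main_loss = F.cross_entropy(main_logits, target, ignore_index=255)
    aux_loss = F.cross_entropy(aux_logits, target, ignore_index=255)
    return main_loss + aux_weight * aux_loss
```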
5 Experiments
5.1 Implementation
- Based on Caffe
-"Poly" Learning rate Policy: (1-iter/maxiter) ^power
where base lr=0.01 power=0.9
- Momentum = 0.9
- Weight decay = 0.0001
- Data augmentation: random mirroring, random resize (0.5x to 2x), random rotation (-10° to 10°)
- Batch size: 16
- Auxiliary loss weight: 0.4
The auxiliary loss acts as a lever that helps the optimization.
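A small sketch of the "poly" learning-rate policy listed above; the iteration counts in the example are made up.

```python
def poly_lr(iteration, max_iter, base_lr=0.01, power=0.9):
    # lr = base_lr * (1 - iter / max_iter) ** power
    return base_lr * (1.0 - iteration / max_iter) ** power

# Halfway through training the learning rate has decayed to roughly 0.0054.
print(poly_lr(iteration=50_000, max_iter=100_000))  # ~0.00536
```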
References
[3] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. CoRR, abs/1412.7062, 2014.
[4] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. CoRR, abs/1606.00915, 2016.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pages 346–361, 2014.
[24] W. Liu, A. Rabinovich, and A. C. Berg. ParseNet: Looking wider to see better. CoRR, abs/1506.04579, 2015.
[26] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
[40] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. CoRR, abs/1511.07122, 2015.