A Mixed-scale dense convolutional neural network for image analysis
Published in PNAS on December 26, 2017
Available at PNAS online: https://doi.org/10.1073/pnas.1715832114
Daniël M. Pelt and James A. Sethian
A note up front: this method cannot be easily or efficiently implemented with an existing framework such as TensorFlow or Caffe.
A rough summary:
Contribution:
A new neural network architecture, based on dilated convolutions and dense connections, is proposed; it achieves better results on segmentation tasks with fewer parameters and easier training.
Details:
The task is still essentially pixel-to-pixel segmentation, but the network uses no downsampling or upsampling operations.
Different channels within each layer use different dilation rates (the paper's assignment rule is sketched just below).
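Concretely, the paper assigns dilations with a simple cyclic rule: channel j of hidden layer i gets s_ij = ((i·w + j) mod 10) + 1, where w is the layer width, so dilations cycle through 1 to 10. A small sketch (the function name is mine):

```python
# Dilation rule from the paper: channel j of hidden layer i uses
# s_ij = ((i * w + j) mod 10) + 1, cycling through dilations 1..10.
def dilation(i: int, j: int, w: int) -> int:
    return (i * w + j) % 10 + 1

# Example with width w = 2: layer 0 uses dilations [1, 2],
# layer 1 uses [3, 4], layer 2 uses [5, 6], and so on.
```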
From input to output, every layer has the same spatial size, which makes dense connections possible: all channels of all earlier layers (including the input) are available to the current operation. The authors argue that this maximizes reuse of already-computed feature maps.
All hidden layers use 3×3 dilated convolutions, and the final layer uses a 1×1 convolution (equivalent to a linear combination of all channels of all previous layers).
The number of channels per hidden layer is denoted w and the number of hidden layers is denoted d; the paper illustrates the resulting connection pattern with a small example figure (not reproduced here).
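To make the structure concrete, here is a minimal PyTorch sketch based on the description above. The class name MSDNet, the defaults, and the ReLU activations are my own choices (the authors use a custom implementation rather than a standard framework); the dense connections, per-channel 3×3 dilated convolutions with the cycling dilation rule, and the final 1×1 convolution follow the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MSDNet(nn.Module):
    """Sketch of a Mixed-Scale Dense network: every hidden layer is
    densely connected to all earlier feature maps, all maps keep the
    input's spatial size, and dilations cycle through 1..10."""

    def __init__(self, in_channels=1, out_channels=1, width=1, depth=100):
        super().__init__()
        self.width, self.depth = width, depth
        self.convs = nn.ModuleList()
        for i in range(depth):
            n_in = in_channels + i * width  # dense: all earlier maps are inputs
            for j in range(width):
                d = (i * width + j) % 10 + 1  # cycling dilation rule
                # padding = dilation makes the 3x3 conv size-preserving,
                # so no down- or upsampling is ever needed.
                self.convs.append(
                    nn.Conv2d(n_in, 1, kernel_size=3, padding=d, dilation=d))
        # Final 1x1 conv: a linear combination of all channels of all
        # layers, including the input itself.
        self.final = nn.Conv2d(in_channels + depth * width, out_channels,
                               kernel_size=1)

    def forward(self, x):
        maps = [x]
        k = 0
        for _ in range(self.depth):
            stacked = torch.cat(maps, dim=1)  # all feature maps so far
            for _ in range(self.width):
                maps.append(F.relu(self.convs[k](stacked)))
                k += 1
        return self.final(torch.cat(maps, dim=1))


# Usage: a 100-layer, width-1 network for 2-class segmentation.
net = MSDNet(in_channels=1, out_channels=2, width=1, depth=100)
out = net(torch.randn(1, 1, 64, 64))  # -> torch.Size([1, 2, 64, 64])
```

Note that this naive version re-concatenates every earlier feature map at each layer; that memory traffic is exactly the kind of overhead that makes a quick implementation in a standard framework inefficient, as noted above.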
Advantages:
Training is fast, the parameter count is small, and the risk of overfitting is low.
Disadvantages:
You cannot quickly build an efficient implementation using an existing framework.
"Paper reading" A Mixed-scale dense convolutional neural network for image analysis