3D Densely Convolutional Networks for Volumetric Segmentation
Toan Duc Bui, Jitae Shin, and Taesup Moon
School of Electronic and Electrical Engineering, Sungkyunkwan University, Republic of Korea
Task :
Segmentation of 6-month-old infant brain MRI into four classes: white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), and background (BG).
Data :
Public 6-month infant brain MRI segmentation challenge (iSeg-2017) dataset
http://iseg2017.web.unc.edu/
Ten training samples plus a separate set of test samples.
Each sample includes a T1-weighted image and a T2-weighted image.
Method :
DenseNet is used as the base network, built with 3D convolutions; a 1*1*1 convolution is placed before each 3*3*3 convolution to reduce the number of parameters (see the sketch after this list);
Loss of spatial location information is reduced by using stride-2 convolutions instead of pooling;
Dropout (rate = 0.2) is applied after each 3*3*3 convolution to prevent overfitting;
T1 and T2 images are each whitened and used as input channels; inputs are cropped to 64*64*64 patches (limited by GPU memory);
Adam optimizer, mini-batch size 4, learning rate 0.0002, decayed by a factor of 0.1 every 50,000 iterations;
Predictions in overlapping patch regions are combined by majority voting (see the inference sketch further below).
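To make the architecture notes concrete, below is a minimal PyTorch sketch (my own, not the authors' code) of one 3D dense layer with a 1*1*1 bottleneck before the 3*3*3 convolution and dropout 0.2 after it, plus a stride-2 convolution transition that replaces pooling. Growth rate, bottleneck width, and channel counts are placeholder choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseLayer3D(nn.Module):
    """One layer of a 3D dense block: BN-ReLU-1x1x1 conv (bottleneck),
    then BN-ReLU-3x3x3 conv followed by dropout (rate 0.2)."""
    def __init__(self, in_channels, growth_rate, bottleneck_factor=4, drop_rate=0.2):
        super().__init__()
        inter = bottleneck_factor * growth_rate  # bottleneck width (assumed, not from the paper)
        self.bottleneck = nn.Sequential(
            nn.BatchNorm3d(in_channels), nn.ReLU(inplace=True),
            nn.Conv3d(in_channels, inter, kernel_size=1, bias=False),
        )
        self.conv = nn.Sequential(
            nn.BatchNorm3d(inter), nn.ReLU(inplace=True),
            nn.Conv3d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
            nn.Dropout3d(drop_rate),
        )

    def forward(self, x):
        out = self.conv(self.bottleneck(x))
        # DenseNet connectivity: concatenate the input with the new feature maps
        return torch.cat([x, out], dim=1)


class TransitionDown3D(nn.Module):
    """Downsampling with a stride-2 convolution instead of pooling."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.down = nn.Sequential(
            nn.BatchNorm3d(in_channels), nn.ReLU(inplace=True),
            nn.Conv3d(in_channels, out_channels, kernel_size=2, stride=2, bias=False),
        )

    def forward(self, x):
        return self.down(x)


if __name__ == "__main__":
    # Tiny usage example on a 2-channel (T1 + T2) 64^3 patch
    x = torch.randn(1, 2, 64, 64, 64)
    net = nn.Sequential(
        nn.Conv3d(2, 32, kernel_size=3, padding=1),
        DenseLayer3D(32, growth_rate=16),   # 32 -> 48 channels
        DenseLayer3D(48, growth_rate=16),   # 48 -> 64 channels
        TransitionDown3D(64, 64),           # halves the spatial size
    )
    print(net(x).shape)  # torch.Size([1, 64, 32, 32, 32])
```

The stated training settings map directly onto `torch.optim.Adam(model.parameters(), lr=2e-4)` with a `torch.optim.lr_scheduler.StepLR(optimizer, step_size=50000, gamma=0.1)` scheduler stepped once per iteration.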
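The input preprocessing and test-time patching can be sketched as follows, assuming a `model` that maps a (1, 2, 64, 64, 64) input to full-resolution per-class logits of shape (1, 4, 64, 64, 64); the patch stride (i.e. how much the patches overlap) is my own assumption, since these notes do not record it.

```python
import numpy as np
import torch

def whiten(volume):
    """Zero-mean, unit-variance normalization of one modality (T1 or T2)."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

@torch.no_grad()
def segment_with_voting(model, t1, t2, num_classes=4, patch=64, stride=32):
    """Slide a 64^3 window over the volume and decide voxels covered by
    several patches with a majority vote over the patch predictions.
    `stride` (the overlap) is an assumed value, not taken from the paper."""
    vol = np.stack([whiten(t1), whiten(t2)])            # (2, D, H, W), D/H/W >= 64 assumed
    _, D, H, W = vol.shape
    votes = np.zeros((num_classes, D, H, W), dtype=np.int32)

    def starts(size):
        # Window start positions, making sure the last window touches the border
        return sorted(set(list(range(0, size - patch + 1, stride)) + [size - patch]))

    for z in starts(D):
        for y in starts(H):
            for x in starts(W):
                inp = torch.from_numpy(vol[:, z:z+patch, y:y+patch, x:x+patch]).float()[None]
                pred = model(inp).argmax(dim=1)[0].numpy()   # (64, 64, 64) label map
                for c in range(num_classes):
                    # Each patch casts one vote per voxel for its predicted class
                    votes[c, z:z+patch, y:y+patch, x:x+patch] += (pred == c)

    return votes.argmax(axis=0)   # majority-voted label map, shape (D, H, W)
```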
Evaluation Metrics :
Dice coefficient (DC),
Modified Hausdorff Distance (MHD),
Average Surface Distance (ASD); a minimal Dice computation is sketched below.
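For reference, the Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for a single tissue label (WM, GM, or CSF) can be computed as in the short sketch below; MHD and ASD are surface-distance metrics and are usually taken from an existing evaluation toolkit rather than re-implemented.

```python
import numpy as np

def dice_coefficient(pred, target, label):
    """Dice coefficient between the predicted and ground-truth masks of one label."""
    a = (pred == label)
    b = (target == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0
```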
Personal summary (subjective notes written for myself; take them with a grain of salt):
There is no particularly significant innovation here, but there are at least some points worth borrowing.
First, one question: 3D segmentation has always been hampered by small training sets, and the dataset used here is very small. The volumes are cropped into overlapping patches (the original T1 size and the amount of overlap are not specified), but data augmentation is not mentioned, so it is unclear whether the cropped patches alone provide enough training data or whether other augmentation was used.
Points worth borrowing for my usual 2D training pipeline:
1. Strided convolution instead of pooling
2. DenseNet as the backbone network
3. Dropout (I normally do not use dropout for segmentation; worth trying)
"Reading notes" 3D densely convolutional Networks for volumetric segmentation