DeepLab Operating Guide


The following is only a summary, compiled from many online references and kept as a memo.

Main links:
DeepLab home: http://liangchiehchen.com/projects/DeepLab.html
Official code: https://bitbucket.org/aquariusjay/deeplab-public-ver2
Python/Caffe implementation: https://github.com/TheLegendAli/DeepLab-Context2
Model download: http://liangchiehchen.com/projects/deeplab_models.html (DeepLabv2_VGG16 and DeepLabv2_ResNet101 pre-trained models)
PyTorch implementation of DeepLab: https://github.com/isht7/pytorch-deeplab-resnet
Other open-source training code: martinkersner/train-deeplab

Main operating steps

Below, the explanation mainly follows one open-source version: https://github.com/xmojiao/deeplab_v2.
The main steps can be found in the following references:
1. Image Semantic Segmentation: Training DeepLab v2 from scratch, Part I "Source code parsing"
2. Image Semantic Segmentation: Training DeepLab v2 from scratch, Part II "VOC2012 dataset"
3. DeepLab v2 debugging process (Ubuntu 16.04 + CUDA 8.0)

Here are some problems I encountered that are not mentioned in the references above.
1. Installing matio:
The references above use matio-1.5.2.tar.gz, but I could not install it (it may not be compatible with my libraries), so I downloaded the newer matio-1.5.11 and installed it as follows:

cd matio-1.5.11
./configure --prefix=/data1/...   # fill in your own installation directory
make
make check   # optional
make install

Finally, add the directory containing libmatio.so.2 to LD_LIBRARY_PATH in your ~/.bashrc.
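For example, assuming matio was installed under the prefix chosen above (the path below is a placeholder; use your own), append a line like this to ~/.bashrc:

# example only: point LD_LIBRARY_PATH at the lib directory of your matio install prefix
export LD_LIBRARY_PATH=/data1/your_matio_prefix/lib:$LD_LIBRARY_PATH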

Reference: http://blog.csdn.net/houqiqi/article/details/46469981
2. The Caffe version used here is old, so it is incompatible with newer environments in many places. I use cuDNN 6.0 and CUDA 8.0, and the following error appears:

./include/caffe/util/cudnn.hpp: In function 'void caffe::cudnn::createPoolingDesc(cudnnPoolingStruct**, caffe::PoolingParameter_PoolMethod, cudnnPoolingMode_t*, int, int, int, int, int, int)':
./include/caffe/util/cudnn.hpp:127:41: error: too few arguments to function 'cudnnStatus_t cudnnSetPooling2dDescriptor(cudnnPoolingDescriptor_t, cudnnPoolingMode_t, cudnnNanPropagation_t, int, int, int, int, int, int)'
         pad_h, pad_w, stride_h, stride_w);

This is caused by a cuDNN version mismatch: the author's environment used cuDNN 4.0, but the cuDNN interface changed from version 5.0 onwards.
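If you are not sure which cuDNN version your build is picking up, you can print the version macros from the installed header (the /usr/local/cuda path below is an assumption; adjust it to your installation):

# print CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL from the installed header
grep -A 2 "define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h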

Workaround: replace the following files with the corresponding files from the latest BVLC Caffe and recompile:

./include/caffe/util/cudnn.hpp
./include/caffe/layers/cudnn_conv_layer.hpp
./include/caffe/layers/cudnn_relu_layer.hpp
./include/caffe/layers/cudnn_sigmoid_layer.hpp
./include/caffe/layers/cudnn_tanh_layer.hpp

./src/caffe/layers/cudnn_conv_layer.cpp
./src/caffe/layers/cudnn_conv_layer.cu
./src/caffe/layers/cudnn_relu_layer.cpp
./src/caffe/layers/cudnn_relu_layer.cu
./src/caffe/layers/cudnn_sigmoid_layer.cpp
./src/caffe/layers/cudnn_sigmoid_layer.cu
./src/caffe/layers/cudnn_tanh_layer.cpp
./src/caffe/layers/cudnn_tanh_layer.cu
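A minimal sketch of this replacement, assuming a recent BVLC Caffe clone sits next to the DeepLab code at ../caffe (the path is an assumption; adjust it to your own checkout), run from the deeplab-public-ver2 directory:

# copy the cuDNN wrapper files listed above from a newer BVLC Caffe checkout, then rebuild
BVLC=../caffe
cp ${BVLC}/include/caffe/util/cudnn.hpp ./include/caffe/util/
for n in conv relu sigmoid tanh; do
    cp ${BVLC}/include/caffe/layers/cudnn_${n}_layer.hpp ./include/caffe/layers/
    cp ${BVLC}/src/caffe/layers/cudnn_${n}_layer.cpp ./src/caffe/layers/
    cp ${BVLC}/src/caffe/layers/cudnn_${n}_layer.cu ./src/caffe/layers/
done
make clean && make all -j8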

Reference: http://blog.csdn.net/tianrolin/article/details/71246472
3. How to solve the problem of DeepLab v2 producing all-black result images.
As the author explains at http://liangchiehchen.com/projects/DeepLab_FAQ.html:

Q: When evaluating the DeepLab outputs (without CRF), I got all-background results (i.e., all black results). Is there anything wrong?

A: Please double check if the name of your fc8 is fc8_voc12 in the generated test_val.prototxt or test_test.prototxt (after running run_pascal.sh). The name should be matched for initialization.

In other words, the author's pre-trained model does not match the model you actually test, and the main problem is the fc8 layer.
Take https://github.com/xmojiao/deeplab_v2 as an example: if you use that code directly, run_pascal.sh sits inside the voc12 directory with its experiment variable set to the current directory, which differs from the official preset. The official code assumes run_pascal.sh is placed one level above the voc12 directory. As a result, fc8_voc12_1, fc8_voc12_2, fc8_voc12_3 and fc8_voc12_4 are ignored (not initialized from the pre-trained weights) when the final test runs.
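A quick way to verify this after running run_pascal.sh is to grep the generated prototxt for the classifier layer names (the path below assumes the EXP2=. and NET_ID=deeplab_largeFOV settings used in the script further down):

# the classifier layers should be named fc8_voc12_1 ... fc8_voc12_4
grep -n "fc8" ./config/deeplab_largeFOV/test_val.prototxt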

In addition, because the test step generates test_val.prototxt from test.prototxt, it is mainly test.prototxt that needs to be modified.
The changes to test.prototxt are as follows:

layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  top: "data_dim"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 513
    mean_value: 104.008
    mean_value: 116.669
    mean_value: 122.675
  }
  image_data_param {
    root_folder: "${DATA_ROOT}"
    source: "./${EXP}/list/${TEST_SET}.txt"   # (change)
    batch_size: 1
    label_type: NONE
  }
}

The changes to run_pascal.sh are as follows:

#!/bin/sh

## MODIFY PATH for YOUR SETTING
ROOT_DIR=/data1/caiyong.wang/data/deeplab_data

CAFFE_DIR=../deeplab-public-ver2
CAFFE_BIN=${CAFFE_DIR}/.build_release/tools/caffe.bin

EXP=voc12   # matches the directory of the originally trained model (change)
EXP2=.      # the current directory (change)

if [ "${EXP2}" = "." ]; then
    NUM_LABELS=21
    DATA_ROOT=${ROOT_DIR}/VOC_aug/dataset/
else
    NUM_LABELS=0
    echo "Wrong EXP name"
fi

## Specify which model to train
########### voc12 ################
NET_ID=deeplab_largeFOV

## Variables used for weakly or semi-supervisedly training
#TRAIN_SET_SUFFIX=
TRAIN_SET_SUFFIX=_aug

#TRAIN_SET_STRONG=train
#TRAIN_SET_STRONG=train200
#TRAIN_SET_STRONG=train500
#TRAIN_SET_STRONG=train1000
#TRAIN_SET_STRONG=train750

#TRAIN_SET_WEAK_LEN=

DEV_ID=0

#####

## Create dirs

CONFIG_DIR=${EXP2}/config/${NET_ID}
MODEL_DIR=${EXP2}/model/${NET_ID}
mkdir -p ${MODEL_DIR}
LOG_DIR=${EXP2}/log/${NET_ID}
mkdir -p ${LOG_DIR}
export GLOG_log_dir=${LOG_DIR}

## Run

RUN_TRAIN=0
RUN_TEST=1
RUN_TRAIN2=0
RUN_TEST2=0

## Training #1 (on train_aug)

if [ ${RUN_TRAIN} -eq 1 ]; then
    #
    LIST_DIR=${EXP2}/list
    TRAIN_SET=train${TRAIN_SET_SUFFIX}
    if [ -z ${TRAIN_SET_WEAK_LEN} ]; then
        TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}
        comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
    else
        TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}_head${TRAIN_SET_WEAK_LEN}
        comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt | head -n ${TRAIN_SET_WEAK_LEN} > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
    fi
    #
    MODEL=${EXP2}/model/${NET_ID}/init.caffemodel
    #
    echo Training net ${EXP2}/${NET_ID}
    for pname in train solver; do
        sed "$(eval echo $(cat sub.sed))" \
            ${CONFIG_DIR}/${pname}.prototxt > ${CONFIG_DIR}/${pname}_${TRAIN_SET}.prototxt
    done
    CMD="${CAFFE_BIN} train \
         --solver=${CONFIG_DIR}/solver_${TRAIN_SET}.prototxt \
         --gpu=${DEV_ID}"
    if [ -f ${MODEL} ]; then
        CMD="${CMD} --weights=${MODEL}"
    fi
    echo Running ${CMD} && ${CMD}
fi

## Test #1 specification (on val or test)

if [ ${RUN_TEST} -eq 1 ]; then
    #
    for TEST_SET in val; do
        TEST_ITER=`cat ${EXP2}/list/${TEST_SET}.txt | wc -l`
        MODEL=${EXP2}/model/${NET_ID}/test.caffemodel
        if [ ! -f ${MODEL} ]; then
            MODEL=`ls -t ${EXP2}/model/${NET_ID}/train_iter_*.caffemodel | head -n 1`
        fi
        #
        echo Testing net ${EXP2}/${NET_ID}
        FEATURE_DIR=${EXP2}/features/${NET_ID}
        mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc8
        mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc9
        mkdir -p ${FEATURE_DIR}/${TEST_SET}/seg_score
        sed "$(eval echo $(cat sub.sed))" \
            ${CONFIG_DIR}/test.prototxt > ${CONFIG_DIR}/test_${TEST_SET}.prototxt
        CMD="${CAFFE_BIN} test \
             --model=${CONFIG_DIR}/test_${TEST_SET}.prototxt \
             --weights=${MODEL} \
             --gpu=${DEV_ID} \
             --iterations=${TEST_ITER}"
        echo Running ${CMD} && ${CMD}
    done
fi

## Training #2 (finetune on trainval_aug)

if [ ${RUN_TRAIN2} -eq 1 ]; then
    #
    LIST_DIR=${EXP2}/list
    TRAIN_SET=trainval${TRAIN_SET_SUFFIX}
    if [ -z ${TRAIN_SET_WEAK_LEN} ]; then
        TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}
        comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
    else
        TRAIN_SET_WEAK=${TRAIN_SET}_diff_${TRAIN_SET_STRONG}_head${TRAIN_SET_WEAK_LEN}
        comm -3 ${LIST_DIR}/${TRAIN_SET}.txt ${LIST_DIR}/${TRAIN_SET_STRONG}.txt | head -n ${TRAIN_SET_WEAK_LEN} > ${LIST_DIR}/${TRAIN_SET_WEAK}.txt
    fi
    #
    MODEL=${EXP2}/model/${NET_ID}/init2.caffemodel
    if [ ! -f ${MODEL} ]; then
        MODEL=`ls -t ${EXP2}/model/${NET_ID}/train_iter_*.caffemodel | head -n 1`
    fi
    #
    echo Training2 net ${EXP2}/${NET_ID}
    for pname in train solver2; do
        sed "$(eval echo $(cat sub.sed))" \
            ${CONFIG_DIR}/${pname}.prototxt > ${CONFIG_DIR}/${pname}_${TRAIN_SET}.prototxt
    done
    CMD="${CAFFE_BIN} train \
         --solver=${CONFIG_DIR}/solver2_${TRAIN_SET}.prototxt \
         --weights=${MODEL} \
         --gpu=${DEV_ID}"
    echo Running ${CMD} && ${CMD}
fi

## Test #2 on official test set

if [ ${RUN_TEST2} -eq 1 ]; then
    #
    for TEST_SET in val test; do
        TEST_ITER=`cat ${EXP2}/list/${TEST_SET}.txt | wc -l`
        MODEL=${EXP2}/model/${NET_ID}/test2.caffemodel
        if [ ! -f ${MODEL} ]; then
            MODEL=`ls -t ${EXP2}/model/${NET_ID}/train2_iter_*.caffemodel | head -n 1`
        fi
        #
        echo Testing2 net ${EXP2}/${NET_ID}
        FEATURE_DIR=${EXP2}/features2/${NET_ID}
        mkdir -p ${FEATURE_DIR}/${TEST_SET}/fc8
        mkdir -p ${FEATURE_DIR}/${TEST_SET}/crf
        sed "$(eval echo $(cat sub.sed))" \
            ${CONFIG_DIR}/test.prototxt > ${CONFIG_DIR}/test_${TEST_SET}.prototxt
        CMD="${CAFFE_BIN} test \
             --model=${CONFIG_DIR}/test_${TEST_SET}.prototxt \
             --weights=${MODEL} \
             --gpu=${DEV_ID} \
             --iterations=${TEST_ITER}"
        echo Running ${CMD} && ${CMD}
    done
fi
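With RUN_TEST=1 set as above, a typical invocation from the directory containing run_pascal.sh looks like the following (the log file name is just an example); the test outputs then land under the ${FEATURE_DIR} directories the script creates, i.e. ./features/deeplab_largeFOV/val/ with the settings above:

# run the test phase and keep a copy of the console output
sh run_pascal.sh 2>&1 | tee run_test.log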
