The previous model was fine-tuned from CaffeNet, but the CaffeNet model is large (about 220 MB) and testing was too slow, so I switched to GoogLeNet.
1. Training
Training crashed at iteration 2,800, after about 20 minutes. The snapshot from iteration 2,000 is used as the model.
2. Testing
2.1 Testing with a batch file
Create a new file Test-TrafficJamBigData03292057.bat in F:\caffe-master170309 with the following contents (this runs the TEST phase of train_val.prototxt using the snapshot weights):
.\build\x64\debug\caffe.exe test --model=models/bvlc_googlenet0329_1/train_val.prototxt --weights=models/bvlc_googlenet0329_1/bvlc_googlenet_iter_2000.caffemodel --gpu=0
pause
The result is as follows:
2.2 Testing a single image
Use the trained model to test recognition accuracy on individual images. (See the notes on using a trained model.)
Modify Debug\classfication.bat (test image: F:\caffe-master170309\data\TrafficJamBigData03281545\test\du\190416357.png); a sketch of its contents follows.
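The original shows the bat contents only as a screenshot. A minimal sketch based on Caffe's cpp_classification example (classification.exe), where the deploy, mean, and label paths are my assumptions and the weights are the iteration-2000 snapshot from section 2.1:

classification.exe ..\..\..\models\bvlc_googlenet0329_1\deploy.prototxt ..\..\..\models\bvlc_googlenet0329_1\bvlc_googlenet_iter_2000.caffemodel mean.binaryproto synset_words.txt F:\caffe-master170309\data\TrafficJamBigData03281545\test\du\190416357.png
pause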
Manual timing shows a single test takes about 3.67 seconds, so the delay in the MFC program can be reduced to 4 seconds. This is much faster than CaffeNet's 11 seconds.
However, the results are not accurate: all 10 test images (5 congested, 5 not congested) were classified as not congested. The output is as follows:
I believe the cause is a poorly trained model, so I retrain it.
3. Retraining (Reference)
The following retrains the GoogLeNet model, mainly increasing the number of iterations and the batch_size.
3.1 Converting the training data to LMDB format and generating the mean file
Previously, with CaffeNet, the model was trained on 227*227 images (crop_size set to 227) and tested on 227*227 images (converted directly from 480*480 to 227*227). It now needs to be retrained with 480*480 images.
3.1.1 Read the image labels and write image name + label to train_label.txt and test_label.txt. The F:\caffe-master170309\data\TrafficJamBigData03301009 folder contains 2 subfolders, 2 *.m scripts, and two empty txt files; running the scripts fills in the label files train_label.txt and test_label.txt, in the format shown below.
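Each line of these label files is "relative image path + space + integer label". A hypothetical two-line excerpt (filenames invented; 0 = BuDu, 1 = du, matching the line order of synset_words.txt in 3.1.4):

train/BuDu/184043101.png 0
train/du/190416357.png 1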
3.1.2 Converting to LMDB format
Under F:\caffe-master170309\Build\x64\Debug, create a new Convert-TrafficJamBigData03301009-train.bat with the following contents:
F:/caffe-master170309/build/x64/debug/convert_imageset.exe --shuffle --resize_width=480 --resize_height=480 F:/caffe-master170309/data/TrafficJamBigData03301009/ F:/caffe-master170309/data/TrafficJamBigData03301009/train_label.txt F:/caffe-master170309/data/TrafficJamBigData03301009/TrafficJamBigData03301009-train_lmdb --backend=lmdb
pause
Under F:\caffe-master170309\Build\x64\Debug, create a new Convert-TrafficJamBigData03301009-test.bat with the following contents:
F:/caffe-master170309/build/x64/debug/convert_imageset.exe --shuffle --resize_width=480 --resize_height=480 F:/caffe-master170309/data/TrafficJamBigData03301009/ F:/caffe-master170309/data/TrafficJamBigData03301009/test_label.txt F:/caffe-master170309/data/TrafficJamBigData03301009/TrafficJamBigData03301009-test_lmdb --backend=lmdb
pause
Run the two bats separately; the TrafficJamBigData03301009-train_lmdb and TrafficJamBigData03301009-test_lmdb folders are generated inside F:\caffe-master170309\data\TrafficJamBigData03301009 (8 files). The result is as follows:
3.1.3 Generating a mean file
Under F:\caffe-master170309\Build\x64\Debug, create a new Mean-TrafficJamBigData03301009.bat; its contents are sketched below.
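The original shows the bat contents as a screenshot. A minimal sketch using Caffe's compute_image_mean tool (the output filename mean.binaryproto is taken from the copy step in 3.1.4; the exact flags in the original may differ):

compute_image_mean.exe F:/caffe-master170309/data/TrafficJamBigData03301009/TrafficJamBigData03301009-train_lmdb mean.binaryproto --backend=lmdb
pause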
3.1.4 Copying files
Create a new TrafficJamBigData03301009 folder under caffe-master170309/examples.
Copy the just-generated Debug\mean.binaryproto, F:\caffe-master170309\data\TrafficJamBigData03301009\TrafficJamBigData03301009-train_lmdb, and F:\caffe-master170309\data\TrafficJamBigData03301009\TrafficJamBigData03301009-test_lmdb into caffe-master170309/examples/TrafficJamBigData03301009.
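The copying can also be scripted; a sketch run from the F:\caffe-master170309 root (folder names as above):

copy Build\x64\Debug\mean.binaryproto examples\TrafficJamBigData03301009\
xcopy /E /I data\TrafficJamBigData03301009\TrafficJamBigData03301009-train_lmdb examples\TrafficJamBigData03301009\TrafficJamBigData03301009-train_lmdb
xcopy /E /I data\TrafficJamBigData03301009\TrafficJamBigData03301009-test_lmdb examples\TrafficJamBigData03301009\TrafficJamBigData03301009-test_lmdb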
Modify F:\caffe-master170309\examples\TrafficJamBigData03301009\synset_words.txt to the two categories BuDu (not congested) and du (congested). Note: BuDu on the first line and du on the second, because the lines must correspond to the numeric labels in the label txt files.
3.1.5 New files
In caffe-master170309/examples/TrafficJamBigData03301009, also create:
an empty RecognizeResultRecordFromCmdTxt.txt,
an empty AnalysisOfRecognitionfromCmdTxt.txt,
and synset_words.txt (first line BuDu, second line du).
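For reference, the complete synset_words.txt is just these two lines:

BuDu
du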
The result is as follows:
3.2 Modifying the training parameter file and model structure
3.2.1 Writing the training bat (do not run it yet); see reference 1, reference 2, and my fine-tuning notes
Create a new Train-TrafficJamBigData03301009.bat file under the F:\caffe-master170309 folder to train the model; it fine-tunes from the pretrained bvlc_googlenet.caffemodel (Caffe copies weights into layers whose names match). The contents are as follows:
.\build\x64\debug\caffe.exe train --solver=models/bvlc_googlenet0329_1/solver.prototxt --weights=models/bvlc_googlenet0329_1/bvlc_googlenet.caffemodel --gpu 0
pause
3.2.2 Parameter file solver.prototxt (refer to my fine-tuning notes)
test_iter: 100                   # originally 1000, changed to 100
test_interval: 1000              # test_interval: 4000 -> 1000
test_initialization: false
display: 40
average_loss: 40
base_lr: 0.01                    # same as the original base_lr: 0.01
lr_policy: "step"                # quick_solver uses lr_policy: "poly" with power: 0.5
stepsize: 320000
gamma: 0.96
max_iter: 50000                  # max_iter: 10000000 -> 10000 -> 50000
momentum: 0.9
weight_decay: 0.0002
snapshot: 1000                   # snapshot: 40000 -> 1000
snapshot_prefix: "models/bvlc_googlenet0329_1/bvlc_googlenet"
solver_mode: GPU
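For reference, with lr_policy: "step" Caffe decays the learning rate as lr = base_lr * gamma ^ floor(iter / stepsize). Because stepsize (320000) is larger than max_iter (50000), the learning rate stays at 0.01 for this entire run.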
3.2.3 Network model file train_val.prototxt (refer to my fine-tuning notes)
During fine-tuning, the error "Check failed: error == cudaSuccess (2 vs. 0) out of memory" appeared. The consensus online is that batch_size must be made smaller: it was reduced from the original 256 to 50, and then from 50 to 10. An alternative is sketched below.
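If the smaller batch slows convergence, Caffe's solver also supports gradient accumulation through the iter_size field (not used in this post); a sketch:

# add to solver.prototxt: effective batch size = batch_size * iter_size
iter_size: 4    # with batch_size: 10 in train_val.prototxt, effective batch = 40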
(The entire network definition is about 2000 lines; only the first 2 layers and the last layer were changed.) The modified parts are excerpted as follows:
The first 2 layers are:
Name: "googlenet" layer { name: "Data" Type: "Data" Top: "Data" Top: "label" include { phase: TRAIN } transform_param { mirror:true# original is also true crop_size:480# originally 224 mean_value:104 mean_value:117 mean_value:123 } data_param { Source: "data/trafficjambigdata03281545/ Trafficjambigdata03281545-train_lmdb " batch_size:10# originally is Backend:lmdb }}layer { name:" Data " Type:" Data " Top:" Data " Top:" label " include { phase:test } Transform_param { mirror:false crop_size:480# turned out to be 224 mean_value:104 mean_value:117 mean_value:123 } Data_param { Source: "Data/trafficjambigdata03281545/trafficjambigdata03281545-test_lmdb" Batch _size:8# originally was a Backend:lmdb }}
The last layer is:
layer {
  name: "loss3/top-5"
  type: "Accuracy"
  bottom: "loss3/classifier123"   # renamed; originally there were 3: loss3/classifier, loss2/classifier, loss1/classifier
  bottom: "label"
  top: "loss3/top-5"
  include {
    phase: TEST
  }
  accuracy_param {
    top_k: 2    # originally 5
  }
}
(Note: with only two classes, top_k: 2 makes this accuracy trivially 1.0; top_k: 1 would measure the real accuracy.)
3.2.4 Test model file deploy.prototxt (refer to my fine-tuning notes)
The first layer and the penultimate layer are modified as follows.
The first layer is:
Name: "googlenet" layer { name: "Data" type: "Input" Top: "Data" Input_param {shape: {dim:10 Dim:3 Dim: 480 dim:480}}# Input_param {shape: {dim:10 dim:3 dim:224 dim:224}}}
The penultimate layer is the renamed classifier (followed here by the final prob layer); renaming it to loss3/classifier123 makes Caffe initialize it fresh instead of copying the pretrained 1000-class weights by name:
Layer { name: "loss3/classifier123" type: "Innerproduct" Bottom: "pool5/7x7_s1" Top: "loss3/ classifier123 " param { lr_mult:1 decay_mult:1 } param { lr_mult:2 decay_mult:0 } Inner_product_param { num_output:2# originally was weight_filler { type: ' Xavier ' } bias_filler { type: "Constant" value:0 } }}layer { name: "Prob" type: "Softmax" Bottom: " Loss3/classifier123 " top:" Prob "}
3.2.5 Run the F:\caffe-master170309\Train-TrafficJamBigData03301009.bat file to start training
1,200 training images + 200 test images; 50,000 iterations; batch_size changed from 32 to 10 (train) and 8 (test).
The training records are as follows: