Most posts online convert a batch of pictures to LMDB format and then measure the overall accuracy. What I wanted was, for each individual picture, the top-1 and top-2 predictions together with their confidence scores. It took an afternoon plus an evening of fumbling to get there, and I ran into plenty of pitfalls along the way; thanks to the PhD student in our lab for helping me track down the bugs.
Description: the dataset is from the Shanghai BOT competition (12 kinds of animals). I downloaded the VGG16 weight file online, changed the number of output classes to 12, and fine-tuned only the last three fully connected layers; after 8 hours of training the top-1 accuracy reached 80%.
The test picture used is a giraffe, whose class number is 8; the results are shown below.
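One preparatory detail: the prediction script below reads the image mean from a NumPy file (mean.npy). If the mean only exists as a mean.binaryproto, as is common after LMDB-based training, a minimal conversion sketch along these lines works (the binaryproto path is an assumption, not taken from the original setup):

#coding: utf-8
# Sketch: convert mean.binaryproto (assumed path) into the mean.npy file
# that the prediction script loads with np.load().
import numpy as np
from caffe.proto import caffe_pb2
import caffe

binaryproto_file = 'f:/bot_data/myvgg16/mean.binaryproto'  # assumed file name
npy_file = 'f:/bot_data/myvgg16/mean.npy'

blob = caffe_pb2.BlobProto()
with open(binaryproto_file, 'rb') as f:
    blob.ParseFromString(f.read())
# blobproto_to_array returns shape (num, channels, height, width); drop the num axis
mean_array = caffe.io.blobproto_to_array(blob)[0]
np.save(npy_file, mean_array)
print 'saved mean with shape', mean_array.shape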
Prediction source code
#coding: utf-8
import numpy as np
import caffe

bot_data_root = 'f:/bot_data'
# network definition (deploy prototxt)
net_file = bot_data_root + '/myvgg16/vgg_ilsvrc_16_layers_deploy.prototxt'
# network weights obtained after training
caffe_model = bot_data_root + '/myvgg16/myvggmodel__iter_80000.caffemodel'
# mean file
mean_file = bot_data_root + '/myvgg16/mean.npy'

# use the GPU
caffe.set_mode_gpu()
# construct the net
net = caffe.Net(net_file, caffe_model, caffe.TEST)

# get the shape of the input data; images are loaded with matplotlib by default
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
# matplotlib loads pixels in [0, 1] with shape [height, width, channels] in RGB order;
# caffe expects pixels in [0, 255] with shape [channels, height, width] in BGR order,
# so the image has to be converted:
# move the channel dimension to the front
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', np.load(mean_file).mean(1).mean(1))
# scale pixel values up to [0, 255]
transformer.set_raw_scale('data', 255)
# RGB --> BGR
transformer.set_channel_swap('data', (2, 1, 0))

# set the input shape: batch size 1, 3 channels, height and width both 224
net.blobs['data'].reshape(1, 3, 224, 224)
# load the picture
im = caffe.io.load_image(bot_data_root + '/test_min/testset1/0a3e66aea7f64597ad851bfffb929c5a.png')
# preprocess the picture just loaded with the transformer defined above
net.blobs['data'].data[...] = transformer.preprocess('data', im)

# print the name and shape of every layer
for layer_name, blob in net.blobs.iteritems():
    print layer_name + '\t' + str(blob.data.shape)

# run a forward pass
output = net.forward()
# the predicted probabilities are in the 'out' blob; take the most likely class
output_prob = output['out'][0]
print 'the predicted class is: ', output_prob.argmax()
# find the two most likely classes and their probabilities
top_inds = output_prob.argsort()[::-1][:2]
print "the two most likely class numbers: ", top_inds
print "the corresponding class probabilities: ", output_prob[top_inds[0]], output_prob[top_inds[1]]
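The script above classifies a single picture. To get top-1 and top-2 predictions for the whole test folder, the same net and transformer can simply be reused in a loop; the following sketch is my extension (the folder path and the use of glob are assumptions, not part of the original script):

# Assumed extension: reuse `net` and `transformer` from the script above to
# print top-1 / top-2 class numbers and confidences for every image in a folder.
import glob

test_dir = bot_data_root + '/test_min/testset1'  # assumed folder layout
for img_path in sorted(glob.glob(test_dir + '/*.png')):
    im = caffe.io.load_image(img_path)
    net.blobs['data'].data[...] = transformer.preprocess('data', im)
    output_prob = net.forward()['out'][0]
    top_inds = output_prob.argsort()[::-1][:2]  # indices of the two largest probabilities
    print img_path, 'top-1:', top_inds[0], output_prob[top_inds[0]], \
        'top-2:', top_inds[1], output_prob[top_inds[1]]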
Network Structure Code
name: "VGG_ILSVRC_16_layers"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
layers { bottom: "data" top: "conv1_1" name: "conv1_1" type: CONVOLUTION convolution_param { num_output: 64 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv1_1" top: "conv1_1" name: "relu1_1" type: RELU }
layers { bottom: "conv1_1" top: "conv1_2" name: "conv1_2" type: CONVOLUTION convolution_param { num_output: 64 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv1_2" top: "conv1_2" name: "relu1_2" type: RELU }
layers { bottom: "conv1_2" top: "pool1" name: "pool1" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool1" top: "conv2_1" name: "conv2_1" type: CONVOLUTION convolution_param { num_output: 128 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv2_1" top: "conv2_1" name: "relu2_1" type: RELU }
layers { bottom: "conv2_1" top: "conv2_2" name: "conv2_2" type: CONVOLUTION convolution_param { num_output: 128 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv2_2" top: "conv2_2" name: "relu2_2" type: RELU }
layers { bottom: "conv2_2" top: "pool2" name: "pool2" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool2" top: "conv3_1" name: "conv3_1" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv3_1" top: "conv3_1" name: "relu3_1" type: RELU }
layers { bottom: "conv3_1" top: "conv3_2" name: "conv3_2" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv3_2" top: "conv3_2" name: "relu3_2" type: RELU }
layers { bottom: "conv3_2" top: "conv3_3" name: "conv3_3" type: CONVOLUTION convolution_param { num_output: 256 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv3_3" top: "conv3_3" name: "relu3_3" type: RELU }
layers { bottom: "conv3_3" top: "pool3" name: "pool3" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool3" top: "conv4_1" name: "conv4_1" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv4_1" top: "conv4_1" name: "relu4_1" type: RELU }
layers { bottom: "conv4_1" top: "conv4_2" name: "conv4_2" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv4_2" top: "conv4_2" name: "relu4_2" type: RELU }
layers { bottom: "conv4_2" top: "conv4_3" name: "conv4_3" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv4_3" top: "conv4_3" name: "relu4_3" type: RELU }
layers { bottom: "conv4_3" top: "pool4" name: "pool4" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool4" top: "conv5_1" name: "conv5_1" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv5_1" top: "conv5_1" name: "relu5_1" type: RELU }
layers { bottom: "conv5_1" top: "conv5_2" name: "conv5_2" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv5_2" top: "conv5_2" name: "relu5_2" type: RELU }
layers { bottom: "conv5_2" top: "conv5_3" name: "conv5_3" type: CONVOLUTION convolution_param { num_output: 512 pad: 1 kernel_size: 3 } blobs_lr: 0 blobs_lr: 0 }
layers { bottom: "conv5_3" top: "conv5_3" name: "relu5_3" type: RELU }
layers { bottom: "conv5_3" top: "pool5" name: "pool5" type: POOLING pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layers { bottom: "pool5" top: "fc6" name: "fc6" type: INNER_PRODUCT inner_product_param { num_output: 4096 } blobs_lr: 1 blobs_lr: 2 }
layers { bottom: "fc6" top: "fc6" name: "relu6" type: RELU }
layers { bottom: "fc6" top: "fc6" name: "drop6" type: DROPOUT dropout_param { dropout_ratio: 0.5 } }
layers { bottom: "fc6" top: "fc7" name: "fc7" type: INNER_PRODUCT inner_product_param { num_output: 4096 } blobs_lr: 1 blobs_lr: 2 }
layers { bottom: "fc7" top: "fc7" name: "relu7" type: RELU }
layers { bottom: "fc7" top: "fc7" name: "drop7" type: DROPOUT dropout_param { dropout_ratio: 0.5 } }
layers { name: "myfc8" bottom: "fc7" top: "myfc8" type: INNER_PRODUCT inner_product_param { num_output: 12 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } blobs_lr: 10 blobs_lr: 20 }
layers { bottom: "myfc8" top: "out" name: "out" type: SOFTMAX }
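As a quick sanity check that the modified prototxt really produces 12 outputs, loading the net and printing each layer's weight shape should show 12 output channels for myfc8 while the convolutional layers keep the standard VGG16 shapes. This is a small sketch using the same paths as the prediction script:

#coding: utf-8
# Sketch: verify the fine-tuned model; myfc8 should have 12 output channels.
import caffe

net_file = 'f:/bot_data/myvgg16/vgg_ilsvrc_16_layers_deploy.prototxt'
caffe_model = 'f:/bot_data/myvgg16/myvggmodel__iter_80000.caffemodel'

net = caffe.Net(net_file, caffe_model, caffe.TEST)
for layer_name, params in net.params.iteritems():
    # params[0] holds the weights, params[1] (if present) the biases
    print layer_name + '\t' + str(params[0].data.shape)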