Using your own trained TensorFlow ckpt model for test recognition (Deep Learning)

The previous post covered how to use TensorFlow to train a ckpt model on your own image set. This section describes how to use that trained ckpt model for test recognition.

Straight to the code:

#!/usr/bin/python2.7
# -*- coding: utf-8 -*-
############################################################################################
# Author: zhaoqinghui
# Date: 2016.5.10
# Function: test recognition using the ckpt model
############################################################################################

import tensorflow as tf
import numpy as np
import sys
import os
#import math
import cv2
from scipy import ndimage
from skimage.filter.thresholding import threshold_adaptive
import SaveAndRestoreChar

#########################################################
result_label_index = 'result_label.txt'
test_img_path = "./testimage/test"
classnum = 36
#########################################################

# ------ centering operation ------
def getBestShift(img):
    cy, cx = ndimage.measurements.center_of_mass(img)
    rows, cols = img.shape
    shiftx = np.round(cols / 2.0 - cx).astype(int)
    shifty = np.round(rows / 2.0 - cy).astype(int)
    return shiftx, shifty

def shift(img, shiftx, shifty):
    rows, cols = img.shape
    M = np.float32([[1, 0, shiftx], [0, 1, shifty]])
    shifted = cv2.warpAffine(img, M, (cols, rows))
    return shifted

# ------ read the label corresponding to each class index ------
def read_result_label_list():
    result_label_dir = []
    result_label = []
    reader = open(result_label_index)
    while 1:
        line = reader.readline()
        if not line:
            break
        tmp = line.split(" ")
        result_label_dir.append(tmp[1][0:-1])
    for i in range(int(classnum)):
        result_label.append(str(result_label_dir[i]))
    return result_label

# ------ recognition of a single test image ------
def processImageProposal(imageID):
    print test_img_path + str(imageID) + '.png'
    gray = cv2.imread(test_img_path + str(imageID) + '.png', cv2.IMREAD_GRAYSCALE)
    result_label = read_result_label_list()
    shiftx, shifty = getBestShift(gray)
    gray = shift(gray, shiftx, shifty)
    flatten = gray.flatten() / 255.0
    pred = SaveAndRestoreChar.sess.run(
        SaveAndRestoreChar.predict,
        feed_dict={SaveAndRestoreChar.x: [flatten], SaveAndRestoreChar.keep_prob: 1.0})
    print "prediction:", int(pred[1])
    cv2.imshow(str(imageID) + ".png", gray)
    cv2.waitKey(0)

def main(argvs):
    imageID = argvs[1]  # the image id is passed as a command-line argument
    #imageID = 12
    print "image_id:", imageID
    processImageProposal(imageID)

if __name__ == '__main__':
    main(sys.argv)
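As a usage note: read_result_label_list() splits each line of result_label.txt on a space and keeps the second field (minus the trailing newline), so the file is expected to contain one "index label" pair per line. The sketch below generates such a file; the choice of ten digits plus 26 uppercase letters for the 36 classes is an assumption for illustration and depends on how your own training set was labeled.

# Minimal sketch (assumption, not from the original post): write a result_label.txt
# with one "index label" pair per line, matching the parsing in read_result_label_list().
labels = [str(d) for d in range(10)] + [chr(ord('A') + i) for i in range(26)]  # 36 classes

with open('result_label.txt', 'w') as writer:
    for index, label in enumerate(labels):
        writer.write("%d %s\n" % (index, label))

With the label file and the test images in place, the script above can be run as, for example, python test_char.py 12 (test_char.py being an assumed file name), which loads ./testimage/test12.png, centers it, and prints the predicted class index.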
Here result_label.txt holds the correct label for each class, and test_img_path is the path prefix of the test images; result_label.txt can also be used to compute the recognition accuracy. The SaveAndRestoreChar module is:
#!/usr/bin/python2.7
# -*- coding: utf-8 -*-
############################################################################################
# Author: zhaoqinghui
# Date: 2016.5.10
# Function: restore the model from checkpoint
############################################################################################

import tensorflow as tf
import numpy as np
import math

classnum = 36

x = tf.placeholder(tf.float32, [None, 28*28])
y_ = tf.placeholder(tf.float32, [None, classnum])

def weight_variable(shape):
    init = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(init)

def bias_variable(shape):
    init = tf.constant(0.1, shape=shape)
    return tf.Variable(init)

## Declare the convolution and pooling operations.
## The convolution declared here is a vanilla version with stride 1 and 'SAME' padding;
## the pooling operation is a 2x2 max pool.
def conv2d(x, W):
    # strides: [batch, in_height, in_width, in_channels]
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def maxpool2d(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

## Model construction
# First layer: one convolution followed by one max pool. The convolution layer's patch size
# is 5x5, the input has 1 channel (grayscale image), and the output is 32 feature maps.
# [5,5,1,32]: patch size 5x5, 1 input channel, 32 output channels (32 comes from the
# network definition, it is not calculated).
x_image = tf.reshape(x, [-1, 28, 28, 1])  # reshape the input into the required format
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
# apply the corresponding operations: conv, relu, maxpool
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = maxpool2d(h_conv1)

# Second layer: one convolution plus one max pool
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = maxpool2d(h_conv2)

# Fully connected layer with 1024 neurons. The image has gone through two 2x2 max pools,
# each with stride 2, so it is now 7x7.
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Add dropout during training; remember to turn it off at test time.
# keep_prob is the probability of keeping a unit; 1.0 means no dropout.
keep_prob = tf.placeholder("float")  # value to be fed in
#keep_prob = 1
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Add the softmax layer
W_fc2 = weight_variable([1024, classnum])
b_fc2 = bias_variable([classnum])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# Loss
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
predict = [tf.reduce_max(y_conv), tf.argmax(y_conv, 1)[0]]

saver = tf.train.Saver()
checkpoint_dir = "./tmp/train_model.cpkt"
sess = tf.InteractiveSession()
saver.restore(sess, checkpoint_dir)
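The post mentions using result_label.txt to measure the recognition rate, but the test script above only classifies one image at a time. Below is a minimal sketch of a batch accuracy check built on the restored session; the number of test images and the expected_classes list are illustrative assumptions, and in practice each image should get the same centering preprocessing as in the test script.

# Minimal accuracy-check sketch (assumptions: ./testimage/test0.png .. test9.png exist
# and expected_classes holds the ground-truth class index of each image).
import cv2
import SaveAndRestoreChar

num_test = 10                      # illustrative number of test images
expected_classes = [0] * num_test  # fill in the true class index per image

correct = 0
for image_id in range(num_test):
    gray = cv2.imread("./testimage/test%d.png" % image_id, cv2.IMREAD_GRAYSCALE)
    flatten = gray.flatten() / 255.0
    pred = SaveAndRestoreChar.sess.run(
        SaveAndRestoreChar.predict,
        feed_dict={SaveAndRestoreChar.x: [flatten],
                   SaveAndRestoreChar.keep_prob: 1.0})  # keep_prob 1.0: dropout off at test time
    if int(pred[1]) == expected_classes[image_id]:
        correct += 1

print "accuracy:", float(correct) / num_test

This assumes the training script from the previous post saved its weights to ./tmp/train_model.cpkt, presumably with something like saver.save(sess, checkpoint_dir), which is the path that saver.restore() loads above.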
With that, a simple TensorFlow model trained and tested on your own image set should be up and running. Have fun, and keep going.
