TensorFlow Learning Tutorial------Implementing LeNet for Binary Classification


# coding: utf-8
from __future__ import print_function
import tensorflow as tf
import os


def read_and_decode(filename):
    # Build a queue from the file name
    filename_queue = tf.train.string_input_producer([filename])
    reader = tf.TFRecordReader()
    # Returns the file name and the serialized record
    _, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        features={
            'label': tf.FixedLenFeature([], tf.int64),
            'img_raw': tf.FixedLenFeature([], tf.string),
        })
    img = tf.decode_raw(features['img_raw'], tf.uint8)
    img = tf.reshape(img, [227, 227, 3])
    # Scale the pixel values to [-0.5, 0.5]
    img = tf.cast(img, tf.float32) * (1. / 255) - 0.5
    label = tf.cast(features['label'], tf.int32)
    print(img, label)
    return img, label


def get_batch(image, label, batch_size, crop_size):
    # Data augmentation
    distorted_image = tf.random_crop(image, [crop_size, crop_size, 3])  # random crop
    distorted_image = tf.image.random_flip_up_down(distorted_image)     # random vertical flip
    distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)        # brightness change
    distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)  # contrast change

    # Generate a batch.
    # shuffle_batch: capacity defines the size of the shuffling pool. To shuffle
    # over the whole training set, capacity should be large enough that the data
    # is mixed thoroughly.
    images, label_batch = tf.train.shuffle_batch(
        [distorted_image, label], batch_size=batch_size,
        num_threads=1, capacity=2000, min_after_dequeue=1000)
    return images, label_batch
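read_and_decode() expects each record in train.tfrecords to carry an int64 'label' and, under 'img_raw', the raw uint8 bytes of a 227x227x3 image. The original post does not show how that file is produced; below is a minimal sketch of a writer matching that feature spec, where the write_tfrecords name, the (path, label) inputs, and the use of PIL for loading and resizing are all illustrative assumptions:

from PIL import Image  # assumption: images are loaded and resized with PIL
import tensorflow as tf


def write_tfrecords(image_paths, labels, out_file="./train.tfrecords"):
    # Hypothetical helper: stores each image as raw uint8 bytes plus its label,
    # matching the 'label'/'img_raw' spec parsed by read_and_decode().
    writer = tf.python_io.TFRecordWriter(out_file)
    for path, label in zip(image_paths, labels):
        img = Image.open(path).convert('RGB').resize((227, 227))
        example = tf.train.Example(features=tf.train.Features(feature={
            'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img.tobytes()])),
        }))
        writer.write(example.SerializeToString())
    writer.close()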
class Network(object):
    # Defines the convolution and fully connected layers of LeNet.

    def lenet(self, images, keep_prob):
        """Based on TensorFlow's conv2d. Define a few basic symbols: an input
        matrix of size W x W (only equal input width and height is considered
        here; the derivation for unequal sides is analogous), a filter matrix
        of size F x F, and a convolution stride S. The output width new_height
        = new_width depends on the padding mode, for which TensorFlow defines
        two values, VALID and SAME:

            VALID: new_height = new_width = ceil((W - F + 1) / S)
            SAME:  new_height = new_width = ceil(W / S)
        """
        images = tf.reshape(images, shape=[-1, 28, 28, 3])
        # images = tf.cast(images, tf.float32) / 255.0 - 0.5

        # Layer 1, convolution: 28,28,3 ---> 5,5,3,32 ---> 28,28,32
        # The kernel is 5*5, the input depth is 3 (a three-channel image),
        # and the kernel depth of 32 is the number of kernels.
        conv1_weights = tf.get_variable(
            "conv1_weights", [5, 5, 3, 32],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable(
            "conv1_biases", [32], initializer=tf.constant_initializer(0.0))
        # Stride 1 with zero ("SAME") padding
        conv1 = tf.nn.conv2d(images, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        # ReLU activation for non-linearity
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))

        # Layer 2, max pooling: 28,28,32 ---> 1,2,2,1 ---> 14,14,32
        # The pooling filter is 2*2, moving with stride 2 and zero padding.
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # Layer 3, convolution: 14,14,32 ---> 5,5,32,64 ---> 14,14,64
        # The kernel is 5*5 and the current depth is 32; there are 64 kernels.
        conv2_weights = tf.get_variable(
            "conv2_weights", [5, 5, 32, 64],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable(
            "conv2_biases", [64], initializer=tf.constant_initializer(0.0))
        # Stride 1 with zero padding
        conv2 = tf.nn.conv2d(pool1, conv2_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, conv2_biases))

        # Layer 4, max pooling: 14,14,64 ---> 1,2,2,1 ---> 7,7,64
        # The pooling filter is 2*2, moving with stride 2 and zero padding.
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # Layer 5, fully connected
        fc1_weights = tf.get_variable(
            "fc1_weights", [7 * 7 * 64, 1024],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        fc1_biases = tf.get_variable(
            "fc1_biases", [1024], initializer=tf.constant_initializer(0.1))  # [1, 1024]
        # Flatten the feature maps into row vectors of 7*7*64 columns
        pool2_vector = tf.reshape(pool2, [-1, 7 * 7 * 64])
        fc1 = tf.nn.relu(tf.matmul(pool2_vector, fc1_weights) + fc1_biases)
        # Add a dropout layer to reduce overfitting
        fc1_dropout = tf.nn.dropout(fc1, keep_prob)

        # Layer 6, fully connected: 1024 neurons in, 2 classification nodes out
        fc2_weights = tf.get_variable(
            "fc2_weights", [1024, 2],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        fc2_biases = tf.get_variable(
            "fc2_biases", [2], initializer=tf.constant_initializer(0.1))
        fc2 = tf.matmul(fc1_dropout, fc2_weights) + fc2_biases
        return fc2

    def lenet_loss(self, fc2, y_):
        # Layer 7, the output layer: softmax cross-entropy loss.
        # softmax_cross_entropy_with_logits applies softmax internally, so the
        # raw logits fc2 are passed in, not tf.nn.softmax(fc2).
        labels = tf.one_hot(y_, 2)
        # cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits=fc2, labels=labels))
        self.cost = loss
        return self.cost

    def lenet_optimer(self, loss, lr=0.01):
        train_optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss)
        return train_optimizer

    # Softmax cross-entropy loss
    def softmax_loss(self, predicts, labels):
        # The one-hot depth is taken from the class dimension of the logits
        labels = tf.one_hot(labels, predicts.get_shape().as_list()[1])
        loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(logits=predicts, labels=labels))
        self.cost = loss
        return self.cost

    # Gradient descent
    def optimer(self, loss, lr=0.01):
        train_optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss)
        return train_optimizer
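As a quick check of the docstring's SAME-padding formula, the short sketch below (not from the original post) traces the 28x28 input through the four conv/pool layers above and confirms the 7*7*64 size that fc1 flattens:

import math


def same_out(w, s):
    # SAME padding: new_width = ceil(W / S)
    return int(math.ceil(float(w) / s))


w = same_out(28, 1)  # conv1, stride 1 -> 28
w = same_out(w, 2)   # pool1, stride 2 -> 14
w = same_out(w, 1)   # conv2, stride 1 -> 14
w = same_out(w, 2)   # pool2, stride 2 -> 7
print(w * w * 64)    # 3136 == 7 * 7 * 64, the input size of fc1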
def train():
    image, label = read_and_decode("./train.tfrecords")
    batch_image, batch_label = get_batch(image, label, batch_size=30, crop_size=28)

    # Build the network for training
    x = tf.placeholder("float", shape=[None, 28, 28, 3], name='x-input')
    y_ = tf.placeholder("int32", shape=[None])
    keep_prob = tf.placeholder(tf.float32)

    net = Network()
    inf = net.lenet(x, keep_prob)
    loss = net.lenet_loss(inf, y_)  # compute the loss
    opti = net.optimer(loss)        # gradient descent
    # Predictions are compared against the labels fed through y_, i.e. the
    # same batch that produced inf
    correct_prediction = tf.equal(tf.cast(tf.argmax(inf, 1), tf.int32), y_)
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    init = tf.global_variables_initializer()
    with tf.Session() as session:
        with tf.device("/gpu:0"):
            session.run(init)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)
            max_iter = 10000
            iter = 0
            saver = tf.train.Saver(max_to_keep=None)
            # A TF1 checkpoint is stored as several files under the prefix,
            # so the existence check looks for the .index file
            if os.path.exists(os.path.join("model", "model.ckpt.index")):
                saver.restore(session, os.path.join("model", "model.ckpt"))
            while iter < max_iter:
                # loss_np, _, label_np, image_np, inf_np = session.run([loss, opti, batch_image, batch_label, inf])
                b_batch_image, b_batch_label = session.run([batch_image, batch_label])
                loss_np, _ = session.run(
                    [loss, opti],
                    feed_dict={x: b_batch_image, y_: b_batch_label, keep_prob: 0.6})
                if iter % 50 == 0:
                    print('train loss:', loss_np)
                if iter % 500 == 0:
                    # accuracy_np = session.run([accuracy])
                    accuracy_np = session.run(
                        [accuracy],
                        feed_dict={x: b_batch_image, y_: b_batch_label, keep_prob: 1.0})
                    print('accuracy:', accuracy_np)
                iter += 1
            # Persist the trained weights so the restore above finds them on the next run
            if not os.path.isdir("model"):
                os.makedirs("model")
            saver.save(session, os.path.join("model", "model.ckpt"))
            coord.request_stop()  # the queue must be closed, otherwise an error is raised
            coord.join(threads)


if __name__ == '__main__':
    train()
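After train() has written a checkpoint under model/model.ckpt, classifying a single image could look roughly like the sketch below. This is not from the original post: the predict helper, the numpy preprocessing, and the assumption that the caller supplies an already cropped and scaled 28x28x3 array are all illustrative.

import os
import numpy as np
import tensorflow as tf


def predict(image_28x28x3):
    # image_28x28x3: float32 array scaled to [-0.5, 0.5] like the training data
    tf.reset_default_graph()  # start from a clean graph so variable names match
    x = tf.placeholder("float", shape=[None, 28, 28, 3], name='x-input')
    keep_prob = tf.placeholder(tf.float32)
    logits = Network().lenet(x, keep_prob)  # Network comes from the code above
    probs = tf.nn.softmax(logits)
    with tf.Session() as session:
        tf.train.Saver().restore(session, os.path.join("model", "model.ckpt"))
        scores = session.run(probs, feed_dict={x: image_28x28x3[np.newaxis],
                                               keep_prob: 1.0})
    return np.argmax(scores, axis=1)[0]  # predicted class: 0 or 1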
