TensorFlow Learning (4): How Saver.save() names saved parameters, how to restore them, and building a handwritten-digit recognition engine

Preface

In the previous chapter we covered how to train the network (see that post). In this chapter we look at how TensorFlow names the different parameters when it saves a network, and how to restore the saved parameters into a rebuilt network structure. Finally, we use the rebuilt network to predict an image (of any size) containing a single digit (0-9).

The code is mainly based on a project on GitHub.

First, how to view the parameters saved to a binary checkpoint file

TensorFlow provides the following method to inspect the saved parameters and read them into a dictionary.

from tensorflow.python import pywrap_tensorflow

reader2 = pywrap_tensorflow.NewCheckpointReader('./model2/mnistmodel2-2')
dic2 = reader2.get_variable_to_shape_map()
for i in dic2:
    print(i, ':', dic2[i])
print(len(dic2))

The output of the above code lists each saved variable's name together with its shape.
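The map returned by get_variable_to_shape_map() is an ordinary Python dict from variable name to shape. A minimal sketch of what iterating it looks like, with illustrative shapes matching the convolutional network rebuilt later in this post (names assume no explicit names were given when the variables were defined):

```python
# Illustrative: the kind of dict NewCheckpointReader.get_variable_to_shape_map()
# would return for the network in this post, with default auto-generated names.
dic2 = {
    'Variable':   [5, 5, 1, 32],   # first conv kernel
    'Variable_1': [32],            # first conv bias
    'Variable_2': [5, 5, 32, 64],  # second conv kernel
    'Variable_3': [64],            # second conv bias
    'Variable_4': [3136, 1024],    # fc1 weights (64*7*7 = 3136)
    'Variable_5': [1024],          # fc1 bias
    'Variable_6': [1024, 10],      # fc2 weights
    'Variable_7': [10],            # fc2 bias
}

for name in dic2:
    print(name, ':', dic2[name])
print(len(dic2))  # 8 variables in total
```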

As you can see, if you don't give a tensor an explicit name when you define it, TensorFlow names it for you. The naming rule is:

If the tensor is a constant, it is named: Const, Const_1, Const_2, Const_3, ...
If the tensor is a variable, it is named: Variable, Variable_1, Variable_2, Variable_3, ...
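The rule above is simply a unique-name scheme: the first tensor of a type gets the bare type name, and each later one gets an underscore plus a counter. A minimal pure-Python sketch of that scheme (an illustrative helper of my own, not TensorFlow code):

```python
def make_unique_name(counts, base):
    """Mimic TensorFlow's default naming: the first use of a base name
    gets the bare name, subsequent uses get base_1, base_2, ..."""
    n = counts.get(base, 0)
    counts[base] = n + 1
    return base if n == 0 else '%s_%d' % (base, n)

counts = {}
names = [make_unique_name(counts, 'Variable') for _ in range(4)]
print(names)  # ['Variable', 'Variable_1', 'Variable_2', 'Variable_3']
print(make_unique_name(counts, 'Const'))  # Const
```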

Second, how to restore the parameters into a rebuilt network

As far as I know, to restore the parameters you must first rebuild a network with the same structure as the trained one (if you know how to recover parameters without rebuilding the graph, please contact me and tell me how you did it). Restoring itself is simple: define a Saver and call restore directly; there is no training step here. The following code restores the model and then recognizes your handwriting (you can draw a digit in Paint, for example).

# encoding=utf-8
import tensorflow as tf
from PIL import Image, ImageFilter
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def imageprepare(argv):
    """Read a picture, process it, and return the pixel values
    to feed into the network for prediction.
    The input is a PNG file location."""
    im = Image.open(argv).convert('L')
    width = float(im.size[0])
    height = float(im.size[1])
    newImage = Image.new('L', (28, 28), (255))  # creates white canvas of 28x28 pixels

    if width > height:  # check which dimension is bigger
        # Width is bigger. Width becomes 20 pixels.
        nheight = int(round((20.0 / width * height), 0))  # resize height according to ratio width
        if nheight == 0:  # rare case but minimum is 1 pixel
            nheight = 1
        # resize and sharpen
        img = im.resize((20, nheight), Image.ANTIALIAS).filter(ImageFilter.SHARPEN)
        wtop = int(round(((28 - nheight) / 2), 0))  # calculate vertical position
        newImage.paste(img, (4, wtop))  # paste resized image on white canvas
    else:
        # Height is bigger. Height becomes 20 pixels.
        nwidth = int(round((20.0 / height * width), 0))  # resize width according to ratio height
        if nwidth == 0:  # rare case but minimum is 1 pixel
            nwidth = 1
        # resize and sharpen
        img = im.resize((nwidth, 20), Image.ANTIALIAS).filter(ImageFilter.SHARPEN)
        wleft = int(round(((28 - nwidth) / 2), 0))  # calculate horizontal position
        newImage.paste(img, (wleft, 4))  # paste resized image on white canvas

    # newImage.save("sample.png")
    tv = list(newImage.getdata())  # get pixel values
    # normalize pixels to 0 and 1. 0 is pure white, 1 is pure black.
    tva = [(255 - x) * 1.0 / 255.0 for x in tv]
    return tva

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

myGraph = tf.Graph()
with myGraph.as_default():
    # rebuild the same network as in training
    with tf.name_scope('inputsAndLabels'):
        x_raw = tf.placeholder(tf.float32, shape=[None, 784])
        y = tf.placeholder(tf.float32, shape=[None, 10])

    with tf.name_scope('hidden1'):
        x = tf.reshape(x_raw, shape=[-1, 28, 28, 1])
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        l_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
        l_pool1 = tf.nn.max_pool(l_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    with tf.name_scope('hidden2'):
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        l_conv2 = tf.nn.relu(tf.nn.conv2d(l_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2)
        l_pool2 = tf.nn.max_pool(l_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    with tf.name_scope('fc1'):
        W_fc1 = weight_variable([64 * 7 * 7, 1024])
        b_fc1 = bias_variable([1024])
        l_pool2_flat = tf.reshape(l_pool2, [-1, 64 * 7 * 7])
        l_fc1 = tf.nn.relu(tf.matmul(l_pool2_flat, W_fc1) + b_fc1)
        keep_prob = tf.placeholder(tf.float32)
        l_fc1_drop = tf.nn.dropout(l_fc1, keep_prob)

    with tf.name_scope('fc2'):
        W_fc2 = weight_variable([1024, 10])
        b_fc2 = bias_variable([10])
        y_conv = tf.matmul(l_fc1_drop, W_fc2) + b_fc2

with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    saver.restore(sess, './model/mnistmodel-1')  # restore the parameters

    array = imageprepare('./1.png')  # read a picture containing a digit
    prediction = tf.argmax(y_conv, 1)  # predict
    prediction = prediction.eval(feed_dict={x_raw: [array], keep_prob: 1.0}, session=sess)
    print('The digit in this image is: %d' % prediction[0])
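The trickiest part of imageprepare() is the arithmetic: the digit is scaled so its longer side becomes 20 pixels, it is then centered on a 28x28 white canvas, and the pixel values are inverted and normalized so that 0 means white and 1 means black. That arithmetic can be checked in isolation; here is a pure-Python sketch (my own helper names, no PIL needed):

```python
def fit_to_20(width, height):
    """Scale (width, height) so the longer side becomes 20 px,
    keeping the aspect ratio; the shorter side is at least 1 px."""
    if width > height:
        nheight = max(1, int(round(20.0 / width * height)))
        return 20, nheight
    nwidth = max(1, int(round(20.0 / height * width)))
    return nwidth, 20

def normalize_pixel(x):
    """Map a 0-255 grayscale value into [0, 1], inverted:
    255 (white) -> 0.0, 0 (black) -> 1.0."""
    return (255 - x) * 1.0 / 255.0

print(fit_to_20(100, 50))    # (20, 10)
print(fit_to_20(3, 400))     # (1, 20), the "minimum is 1 pixel" case
print(normalize_pixel(255))  # 0.0
print(normalize_pixel(0))    # 1.0
```

The inversion matters because MNIST training images are white digits on a black background encoded as high values on dark pixels, while a PNG drawn in Paint is black ink on white.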
Summary

The recognition engine works quite well; its core is a convolutional neural network.
