TensorFlow: Saving Network Parameters and Using the Trained Parameters for Prediction


After a network has been trained, you will usually want to come back to it later, either to continue training or to run predictions, without starting from scratch. This article therefore covers two things: how to store the trained parameters, and how to load and use them.

The main API used is tf.train.Saver:
https://www.tensorflow.org/api_docs/python/tf/train/Saver

The following example builds a convolutional neural network for handwritten digit recognition on MNIST, based on:

https://github.com/xgli/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/convolutional_network.py
1. Saving the Network Parameters

Saving takes two steps, shown in the sketch below and again in the full script that follows:

1. Declare a tf.train.Saver
2. Call saver.save() to save the model
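In isolation, the save/restore pattern looks like this (a minimal sketch with a single dummy variable and a hypothetical checkpoint prefix, separate from the article's example):

    import tensorflow as tf

    v = tf.Variable(tf.zeros([3]), name="v")   # any variable to checkpoint
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()                   # step 1: declare the Saver

    with tf.Session() as sess:
        sess.run(init)
        saver.save(sess, "./demo_model")       # step 2: save the model

    with tf.Session() as sess:
        saver.restore(sess, "./demo_model")    # later: restore, no init needed
        print(sess.run(v))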

"A convolutional Network Implementation example using TensorFlow Library. 
This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/) Author:aymeric Damien Project:https://github.com/aymericdamien/tensorflow-examples/' from __future__ import print_function import tensor Flow as TF # import MNIST data from tensorflow.examples.tutorials.mnist Import input_data MNIST = Input_data.read_data_se TS ("/tmp/data/", one_hot=true) # Parameters learning_rate = 0.001 training_iters = 128*10*3 Batch_size = display_ste  p = Ten # Network Parameters n_input = 784 # MNIST Data input (img shape:28*28) n_classes = ten # MNIST total classes (0-9 digits) dropout = 0.75 # dropout, probability to keep units # tf Graph input x = Tf.placeholder (Tf.float32, [None, N_INP 


UT]) y = Tf.placeholder (Tf.float32, [None, n_classes]) Keep_prob = Tf.placeholder (tf.float32) #dropout (keep probability) # Create Some wrappers for simplicity Def conv2d (x, W, B, Strides=1): # conv2d wrapper, with bias and relu activation x = tf.nn.conv2d (x, W, strides=[1, strides, strides, 1], Paddin G= ' same ') x = Tf.nn.bias_add (x, B) return Tf.nn.relu (x) def maxpool2d (x, k=2): # maxpool2d wrapper retur n Tf.nn.max_pool (x, ksize=[1, K, K, 1], strides=[1, K, K, 1], padding= ' same ') # Create Model D EF conv_net (x, Weights, biases, dropout): # reshape input picture x = Tf.reshape (x, Shape=[-1, 28, 28, 1]) # Convolution Layer conv1 = conv2d (x, weights[' WC1 '], biases[' BC1 ']) # Max Pooling (down-sampling) conv1 = Maxpo Ol2d (CONV1, k=2) # convolution Layer conv2 = conv2d (conv1, weights[' WC2 '], biases[' BC2 ']) # Max Pooling (down -sampling) Conv2 = maxpool2d (Conv2, k=2) # Fully connected Layer # Reshape Conv2 output to fit Fully connecte D Layer Input FC1 = Tf.reshape (Conv2, [-1, weights[' Wd1 '].get_shape (). As_list () [0]]) FC1 = Tf.add (Tf.matmul (FC1, W eights[' Wd1 '), biases['Bd1 ']) FC1 = Tf.nn.relu (FC1) # Apply Dropout fc1 = Tf.nn.dropout (FC1, dropout) # Output, class prediction out = Tf.add (Tf.matmul (FC1, weights["out"), biases[' out ']) return out # Store layers weight & bias weights = {# 5x5 conv, 1 input, outputs ' WC1 ': TF. Variable (Tf.random_normal ([5, 5, 1, +])), # 5x5 conv, inputs, outputs ' WC2 ': TF. Variable (Tf.random_normal ([5, 5, +]), # fully connected, 7*7*64 inputs, 1024x768 outputs ' wd1 ': TF. Variable (Tf.random_normal ([7*7*64, 1024x768])), # 1024x768 inputs, outputs (class prediction) ' Out ': TF. Variable (Tf.random_normal ([1024x768, n_classes])} biases = {' BC1 ': TF. Variable (Tf.random_normal ([+])), ' BC2 ': TF. Variable (Tf.random_normal ([+])), ' Bd1 ': TF. Variable (Tf.random_normal ([1024x768])), ' Out ': TF.  Variable (Tf.random_normal ([n_classes])} # Construct Model pred = conv_net (x, Weights, Biases, Keep_prob) # Define loss and optimizer cost = Tf.reduce_mean (Tf.nn.softmaX_cross_entropy_with_logits (pred, y)) optimizer = Tf.train.AdamOptimizer (learning_rate=learning_rate). Minimize ( Cost) # Evaluate Model correct_pred = tf.equal (Tf.argmax (pred, 1), Tf.argmax (Y, 1)) accuracy = Tf.reduce_mean (Tf.cast (cor  Rect_pred, Tf.float32) # Initializing the variables init = tf.global_variables_initializer () saver = Tf.train.Saver () # #保存的API # Launch the graph with TF. Session () as Sess:sess.run (init) step = 1 # Keep training until reach Max iterations while step * batch_s ize < training_iters:batch_x, batch_y = Mnist.train.next_batch (batch_size) # Run optimization op (back Prop) Sess.run (optimizer, feed_dict={x:batch_x, Y:batch_y, Keep_prob:dro Pout}) If step% Display_step = = 0: # Calculate batch loss and accuracy loss, ACC = SESS.R
   Un ([Cost, accuracy], feed_dict={x:batch_x, y:batch_y,                                                           Keep_prob:1.}) Print ("Iter" + str (step*batch_size + ", Minibatch loss=" + \ "{:. 6f}". Format (Loss) + ", Training accuracy=" + \ "{:. 5
    F} ". Format (ACC)) Step + = 1 print (" Optimization finished! ") Print ("Save Model") Save_path = Saver.save (Sess, "./model") #保存模型 print ("Save model:{0} finished". Format (Save_path ) # Calculate accuracy for mnist test images print ("Testing accuracy:", \ sess.run (accuracy, Feed_di
                                      CT={X:MNIST.TEST.IMAGES[:256], y:mnist.test.labels[:256],
 Keep_prob:1.}))
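Note that saver.save(sess, "./model") does not produce a single file. With the V2 checkpoint format used by TensorFlow 1.x it typically writes model.index, model.meta, and model.data-00000-of-00001, plus a small checkpoint bookkeeping file, and the returned save_path is the "./model" prefix. If you would rather not hard-code that prefix when loading, tf.train.latest_checkpoint can look it up (a small sketch, assuming a checkpoint has already been written to the current directory):

    import tensorflow as tf

    # Returns the prefix of the most recent checkpoint (e.g. "./model"),
    # or None if the directory contains no checkpoint bookkeeping file.
    ckpt = tf.train.latest_checkpoint(".")
    if ckpt is not None:
        print("latest checkpoint prefix:", ckpt)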
2. Using the Stored Network Parameters
"A convolutional Network Implementation example using TensorFlow Library. 
This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/) Author:aymeric Damien Project:https://github.com/aymericdamien/tensorflow-examples/' from __future__ import print_function import tensor Flow as TF # import MNIST data from tensorflow.examples.tutorials.mnist Import input_data MNIST = Input_data.read_data_se TS ("/tmp/data/", one_hot=true) # Parameters learning_rate = 0.001 training_iters = 128*10*3 Batch_size = display_ste  p = Ten # Network Parameters n_input = 784 # MNIST Data input (img shape:28*28) n_classes = ten # MNIST total classes (0-9 digits) dropout = 0.75 # dropout, probability to keep units # tf Graph input x = Tf.placeholder (Tf.float32, [None, N_INP 


UT]) y = Tf.placeholder (Tf.float32, [None, n_classes]) Keep_prob = Tf.placeholder (tf.float32) #dropout (keep probability) # Create Some wrappers for simplicity Def conv2d (x, W, B, Strides=1): # conv2d wrapper, with bias and relu activation x = tf.nn.conv2d (x, W, strides=[1, strides, strides, 1], Paddin G= ' same ') x = Tf.nn.bias_add (x, B) return Tf.nn.relu (x) def maxpool2d (x, k=2): # maxpool2d wrapper retur n Tf.nn.max_pool (x, ksize=[1, K, K, 1], strides=[1, K, K, 1], padding= ' same ') # Create Model D EF conv_net (x, Weights, biases, dropout): # reshape input picture x = Tf.reshape (x, Shape=[-1, 28, 28, 1]) # Convolution Layer conv1 = conv2d (x, weights[' WC1 '], biases[' BC1 ']) # Max Pooling (down-sampling) conv1 = Maxpo Ol2d (CONV1, k=2) # convolution Layer conv2 = conv2d (conv1, weights[' WC2 '], biases[' BC2 ']) # Max Pooling (down -sampling) Conv2 = maxpool2d (Conv2, k=2) # Fully connected Layer # Reshape Conv2 output to fit Fully connecte D Layer Input FC1 = Tf.reshape (Conv2, [-1, weights[' Wd1 '].get_shape (). As_list () [0]]) FC1 = Tf.add (Tf.matmul (FC1, W eights[' Wd1 '), biases['Bd1 ']) FC1 = Tf.nn.relu (FC1) # Apply Dropout fc1 = Tf.nn.dropout (FC1, dropout) # Output, class prediction out = Tf.add (Tf.matmul (FC1, weights["out"), biases[' out ']) return out # Store layers weight & bias weights = {# 5x5 conv, 1 input, outputs ' WC1 ': TF. Variable (Tf.random_normal ([5, 5, 1, +])), # 5x5 conv, inputs, outputs ' WC2 ': TF. Variable (Tf.random_normal ([5, 5, +]), # fully connected, 7*7*64 inputs, 1024x768 outputs ' wd1 ': TF. Variable (Tf.random_normal ([7*7*64, 1024x768])), # 1024x768 inputs, outputs (class prediction) ' Out ': TF. Variable (Tf.random_normal ([1024x768, n_classes])} biases = {' BC1 ': TF. Variable (Tf.random_normal ([+])), ' BC2 ': TF. Variable (Tf.random_normal ([+])), ' Bd1 ': TF. Variable (Tf.random_normal ([1024x768])), ' Out ': TF.  Variable (Tf.random_normal ([n_classes])} # Construct Model pred = conv_net (x, Weights, Biases, Keep_prob) # Define loss and optimizer cost = Tf.reduce_mean (Tf.nn.softmaX_cross_entropy_with_logits (pred, y)) optimizer = Tf.train.AdamOptimizer (learning_rate=learning_rate). Minimize ( Cost) # Evaluate Model correct_pred = tf.equal (Tf.argmax (pred, 1), Tf.argmax (Y, 1)) accuracy = Tf.reduce_mean (Tf.cast (cor  Rect_pred, Tf.float32) # Initializing the variables init = tf.global_variables_initializer () saver = Tf.train.Saver () # #保存的API # Launch the graph with TF. Session () as Sess: #sess. Run (init) #不使用训练好的参数 Load_path = Saver.restore (Sess, "./model") #load上一步训练的参数 step =  1 # Keep training until reach Max iterations while step * batch_size < training_iters:batch_x, batch_y = Mnist.train.next_batch (batch_size) # Run optimization op (backprop) Sess.run (Optimizer, feed_dict={x:b
            Atch_x, Y:batch_y, keep_prob:dropout}) if step% Display_step = = 0:
    # Calculate batch loss and accuracy loss, ACC = Sess.run ([cost, accuracy], feed_dict={x:batch_x,                                                          Y:batch_y, Keep_prob:1.}) Print ("Iter" + str (step*batch_size) + ", Minibatch loss=" + \ "{ :. 6f} ". Format (loss) +", Training accuracy= "+ \" {:. 5f} ". Format (ACC)) Step + = 1 print (" Opt
    Imization finished! ") Print ("Save Model") Save_path = Saver.save (Sess, "./model") #保存模型 print ("Save model:{0} finished". Format (Save_path ) # Calculate accuracy for mnist test images print ("Testing accuracy:", \ sess.run (accuracy, Feed_di
                                      CT={X:MNIST.TEST.IMAGES[:256], y:mnist.test.labels[:256], Keep_prob:1.}))
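In the script above the restored parameters are used to continue training, but the same restore step is all that pure prediction requires. A minimal sketch (reusing pred, x, keep_prob, saver, and mnist from the graph built above; not part of the original scripts):

    with tf.Session() as sess:
        saver.restore(sess, "./model")  # load the trained parameters
        # Run the class-prediction op on a few test images; no optimizer involved
        logits = sess.run(pred, feed_dict={x: mnist.test.images[:5],
                                           keep_prob: 1.})
        print("Predicted digits:", logits.argmax(axis=1))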
