TensorFlow (13): Model Saving and Loading

One: Saving
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the data set
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# 100 images per batch
batch_size = 100
# Calculate the total number of batches
n_batch = mnist.train.num_examples // batch_size

# Define two placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Create a simple neural network: input layer of 784 neurons, output layer of 10 neurons
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
prediction = tf.nn.softmax(tf.matmul(x, W) + b)

# Quadratic cost function
# loss = tf.reduce_mean(tf.square(y - prediction))
# Cross-entropy cost function (note: softmax_cross_entropy_with_logits_v2 expects the
# raw logits tf.matmul(x, W) + b; passing the softmax output, as the original code
# does, still trains but effectively applies softmax twice)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))

# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()

# Store the results in a boolean list
# (argmax returns the position of the largest value along the given axis)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(11):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter " + str(epoch) + ", testing accuracy " + str(acc))
    # Save the model
    saver.save(sess, 'net/my_net.ckpt')
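A side note before the results (this variation is not in the original post): tf.train.Saver can also keep rolling snapshots during training instead of one final file. max_to_keep caps how many checkpoint files are retained, and passing global_step to saver.save tags each file with the epoch number. These lines are a sketch meant to drop into the script above, reusing its sess and epoch variables:

# Assumed variation, not part of the original script:
saver = tf.train.Saver(max_to_keep=5)  # retain at most the 5 newest checkpoints
# ...then, once per epoch inside the training loop:
saver.save(sess, 'net/my_net.ckpt', global_step=epoch)  # writes net/my_net.ckpt-0, -1, ...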
Results:
Iter 0, testing accuracy 0.8252
Iter 1, testing accuracy 0.8916
Iter 2, testing accuracy 0.9008
Iter 3, testing accuracy 0.906
Iter 4, testing accuracy 0.9091
Iter 5, testing accuracy 0.9104
Iter 6, testing accuracy 0.911
Iter 7, testing accuracy 0.9127
Iter 8, testing accuracy 0.9145
Iter 9, testing accuracy 0.9166
Iter 10, testing accuracy 0.9177
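On disk, saver.save in recent TF 1.x versions writes several files rather than one: my_net.ckpt.index and my_net.ckpt.data-00000-of-00001 (the variable values), my_net.ckpt.meta (the graph definition), and a checkpoint bookkeeping file. To verify what was stored, tf.train.NewCheckpointReader can list the saved variables. A small sketch, not from the original post; the names Variable and Variable_1 are what TensorFlow auto-assigns to the unnamed W and b above, so treat them as expected rather than guaranteed:

import tensorflow as tf

# List the variables stored in the checkpoint written by the script above.
reader = tf.train.NewCheckpointReader('net/my_net.ckpt')
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)  # expected: Variable [784, 10] and Variable_1 [10]
    # reader.get_tensor(name) returns the saved value as a numpy array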
Two: Loading
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the data set
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# 100 images per batch
batch_size = 100
# Calculate the total number of batches
n_batch = mnist.train.num_examples // batch_size

# Define two placeholders
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# Re-create the same simple network: input layer of 784 neurons, output layer of 10 neurons
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
prediction = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy cost function
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=prediction))

# Train with gradient descent
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()

# Store the results in a boolean list
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)
    # Recognition rate before the model is loaded (weights are still all zero)
    print('recognition rate before loading the model',
          sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
    saver.restore(sess, 'net/my_net.ckpt')
    # Recognition rate after loading the model
    print('recognition rate after loading the model',
          sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels}))
Results:
recognition rate before loading the model 0.098
INFO:tensorflow:Restoring parameters from net/my_net.ckpt
recognition rate after loading the model 0.9177
The 0.098 before restoring is no accident: W and b are initialized to zeros, so every image gets identical scores for all ten classes, tf.argmax resolves the tie to index 0, and the untrained network predicts "0" for every input; 980 of the 10,000 MNIST test images are zeros, hence 0.098. After saver.restore the trained weights are back and the accuracy returns to the 0.9177 reached at the end of training.
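As a closing note not covered in the original post: the my_net.ckpt.meta file saved alongside the checkpoint also stores the graph definition itself, so tf.train.import_meta_graph can rebuild the network without repeating its Python definition. A minimal sketch, where the tensor name 'Placeholder:0' is an assumption based on TensorFlow's auto-generated names for ops created without explicit names:

import tensorflow as tf

tf.reset_default_graph()
# Recreate the graph structure from the .meta file written by saver.save,
# then restore the trained variable values into it.
saver = tf.train.import_meta_graph('net/my_net.ckpt.meta')
with tf.Session() as sess:
    saver.restore(sess, 'net/my_net.ckpt')
    graph = tf.get_default_graph()
    # The first unnamed placeholder is assumed to be 'Placeholder:0'.
    x = graph.get_tensor_by_name('Placeholder:0')
    print(x)  # Tensor("Placeholder:0", shape=(?, 784), dtype=float32)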