1. Introduction to the MNIST Dataset
First, load TensorFlow's built-in MNIST dataset with the following two lines of code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('./data/mnist', one_hot=True)
The MNIST dataset contains 55,000 training images (mnist.train.num_examples) with 55,000 corresponding labels, plus 10,000 test images (mnist.test.num_examples), likewise with 10,000 corresponding labels. For easy access, the image and label data are stored in a fixed format.

The training images (mnist.train.images) form a 55000 * 784 matrix. Each row holds the data of one image (28 * 28 * 1), with values in the range [0, 1] representing normalized grayscale pixel intensities.

The training labels (mnist.train.labels) form a 55000 * 10 matrix. The 10 numbers in each row indicate, for the digits 0 through 9, whether the image shows that digit; each entry is either 0 or 1, and exactly one entry per row is 1. The index of that 1 is the correct digit: for example, an image of the digit 3 has the one-hot label [0, 0, 0, 1, 0, 0, 0, 0, 0, 0].

The test set is organized the same way as the training set; only the number of examples differs.
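These shapes are easy to verify. A minimal check, assuming the mnist object loaded by the two lines above:

print(mnist.train.num_examples)   # 55000
print(mnist.test.num_examples)    # 10000
print(mnist.train.images.shape)   # (55000, 784)
print(mnist.train.labels.shape)   # (55000, 10)
print(mnist.train.images.max())   # 1.0 -- pixel values lie in [0, 1]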
The following code displays a few of the MNIST training images together with their labels:
import numpy as np
import matplotlib.pyplot as plot
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('./data/mnist', one_hot=True)
trainImages = mnist.train.images
trainLabels = mnist.train.labels

plot.figure(1, figsize=(4, 3))
for i in range(6):
    curImage = np.reshape(trainImages[i, :], (28, 28))
    curLabel = np.argmax(trainLabels[i, :])
    ax = plot.subplot(2, 3, i + 1)  # 2 x 3 grid of subplots
    plot.imshow(curImage, cmap=plot.get_cmap('gray'))
    plot.axis('off')
    ax.set_title(str(curLabel))
plot.suptitle('MNIST')
plot.show()
The code above produces a figure titled "MNIST" showing the first six training images, each captioned with its label.
2. Training with a Single-Layer Neural Network
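The model is a single fully connected layer followed by a softmax: for an input row vector x of 784 pixel values, the prediction is y = softmax(x * W + b), where W is a 784 * 10 weight matrix and b is a bias vector of length 10. The loss below is the mean squared error between y and the one-hot label, minimized by plain gradient descent.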
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('./data/mnist', one_hot=True)

def train(trainCycle=50000, debug=False):
    inputSize = 784
    outputSize = 10
    batchSize = 64
    inputs = tf.placeholder(tf.float32, shape=[None, inputSize])

    # x * W: [batch, 784] * [784, 10] -> [batch, 10]
    weights = tf.Variable(tf.random_normal([784, 10], 0, 0.1))
    bias = tf.Variable(tf.random_normal([outputSize], 0, 0.1))
    outputs = tf.add(tf.matmul(inputs, weights), bias)
    outputs = tf.nn.softmax(outputs)

    labels = tf.placeholder(tf.float32, shape=[None, outputSize])

    loss = tf.reduce_mean(tf.square(outputs - labels))
    optimizer = tf.train.GradientDescentOptimizer(0.1)
    trainer = optimizer.minimize(loss)

    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    for i in range(trainCycle):
        batch = mnist.train.next_batch(batchSize)
        sess.run([trainer, loss], feed_dict={inputs: batch[0], labels: batch[1]})

        # periodically report accuracy on the current batch
        if debug and i % 1000 == 0:
            corrected = tf.equal(tf.argmax(labels, 1), tf.argmax(outputs, 1))
            accuracy = tf.reduce_mean(tf.cast(corrected, tf.float32))
            accuracyValue = sess.run(accuracy, feed_dict={inputs: batch[0], labels: batch[1]})
            print(i, 'train set accuracy:', accuracyValue)

    # evaluate on the test set
    corrected = tf.equal(tf.argmax(labels, 1), tf.argmax(outputs, 1))
    accuracy = tf.reduce_mean(tf.cast(corrected, tf.float32))
    accuracyValue = sess.run(accuracy, feed_dict={inputs: mnist.test.images, labels: mnist.test.labels})
    print('accuracy on test set:', accuracyValue)

    sess.close()
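To reproduce the run discussed in the next section, call the function directly. A minimal usage sketch (the arguments simply restate the defaults, with debug logging switched on):

train(trainCycle=50000, debug=True)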
3. Training Results
The print log of the model shows that convergence is fast early on, while the accuracy starts to fluctuate later in training. The final accuracy is about 90% on the training set, and the test-set result is similar. This accuracy is still rather low, which shows that a single-layer neural network has serious shortcomings when processing image data and is not a good choice for this task.
Original post: TensorFlow Training MNIST (1) -- Softmax Single-Layer Neural Network, https://www.cnblogs.com/laishenghao/p/9576806.html