Learning Notes TF024: TensorFlow Implements Softmax Regression to Recognize Handwritten Digits

TensorFlow implements Softmax Regression to recognize handwritten digits. MNIST (Mixed National Institute of Standards and Technology database) is a simple machine-vision dataset of 28x28-pixel handwritten digits. The images carry only grayscale information: blank pixels are 0 and stroke pixels take values in [0, 1] according to ink darkness, so each image flattens into a 784-dimensional vector; the two-dimensional spatial information is discarded. The targets are the digits 0 through 9, ten classes in total. Data loading via input_data.read_data_sets yields 55,000 training samples, 10,000 test samples, and 5,000 validation samples. Each sample's annotation, the label, is a 10-dimensional vector, a one-hot encoding of the ten classes. The training set trains the model, the validation set checks the effect during training, and the test set evaluates the final model (accuracy, recall, F1-score).
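
For reference, the data-loading step as it appears in the complete listing at the end of these notes; the printed shapes match the sample counts above:

    from tensorflow.examples.tutorials.mnist import input_data
    # Download (if needed) and load MNIST with one-hot-encoded labels.
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    print(mnist.train.images.shape)       # (55000, 784)
    print(mnist.test.images.shape)        # (10000, 784)
    print(mnist.validation.images.shape)  # (5000, 784)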


Algorithm design: train a Softmax Regression model as the handwritten-digit classifier. The model estimates the probability of each class and outputs the digit with the maximum probability. Each class's features are weighted and summed to give that class's evidence, and the weights are adjusted as the model learns during training. Softmax applies the exp function to each class's evidence and then normalizes, so the output probabilities of all classes sum to 1: y = softmax(Wx + b).
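
Written out (a standard formulation, matching the code below), for a single 784-dimensional input x:

    y = \operatorname{softmax}(Wx + b), \qquad
    \operatorname{softmax}(z)_i = \frac{\exp(z_i)}{\sum_{j=0}^{9} \exp(z_j)}

Each component y_i is a class probability, and the ten probabilities sum to 1.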


NumPy does its matrix computation in C and Fortran, calling BLAS libraries such as OpenBLAS and MKL. TensorFlow likewise executes dense, complex operations outside Python: you define a computation graph once, and the whole graph then runs outside Python, so computed data does not have to be sent back to Python at every step of an operation.


import tensorflow as tf loads the TensorFlow library. sess = tf.InteractiveSession() creates an InteractiveSession and registers it as the default session; data and operations in different sessions are independent of each other. x = tf.placeholder(tf.float32, [None, 784]) creates a placeholder to receive input data: the first argument is the data type and the second is the tensor shape. None places no limit on the number of input rows, and each input is a 784-dimensional vector.
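
The corresponding lines from the listing:

    import tensorflow as tf
    sess = tf.InteractiveSession()  # registered as the default session
    # None: any number of rows; each row is one 784-dimensional image.
    x = tf.placeholder(tf.float32, [None, 784])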


A tensor stores data but disappears once used; a Variable persists across model-training iterations, existing long-term and being updated in every iteration. The Variable objects of the Softmax Regression model, the weights and biases, are initialized to 0; the model learns suitable values automatically during training. (For complex networks, the initialization method is important.) W = tf.Variable(tf.zeros([784, 10])): 784 feature dimensions, 10 classes. The label is a one-hot-encoded 10-dimensional vector.
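
From the listing:

    W = tf.Variable(tf.zeros([784, 10]))  # weights: 784 features x 10 classes
    b = tf.Variable(tf.zeros([10]))       # one bias per class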


The Softmax Regression algorithm: y = tf.nn.softmax(tf.matmul(x, W) + b). tf.nn contains a large number of neural-network components, and tf.matmul is the matrix-multiplication function. TensorFlow implements the forward and backward passes automatically: as long as the loss is defined, training automatically computes the gradients and learns the Softmax Regression model parameters by gradient descent.
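
From the listing, the one-line forward pass:

    y = tf.nn.softmax(tf.matmul(x, W) + b)  # per-sample class probabilities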


Define a loss function to describe the model's classification accuracy on the problem. The smaller the loss, the smaller the gap between the model's classification results and the true values, and the more accurate the model. With all model parameters initialized to zero, there is an initial loss; the training objective is to keep reducing the loss and find a global or local optimum. Cross-entropy is the most common loss function for classification problems: y is the predicted probability distribution and y_ (the one-hot-encoded label) is the true probability distribution, against which the accuracy of the model's predictions is judged. cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])). A placeholder is defined for feeding in the true labels; tf.reduce_sum sums over the classes, and tf.reduce_mean averages the result over each batch.
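
In formula form, the cross-entropy between the true distribution y' (y_ in the code) and the prediction y is

    H_{y'}(y) = -\sum_i y'_i \log(y_i)

which the listing implements as:

    y_ = tf.placeholder(tf.float32, [None, 10])  # true labels, one-hot
    cross_entropy = tf.reduce_mean(              # mean over the batch...
        -tf.reduce_sum(y_ * tf.log(y),           # ...of the per-sample sum
                       reduction_indices=[1]))   # over the 10 classes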


Define the optimization algorithm: Stochastic Gradient Descent (SGD). Based on the computation graph, TensorFlow automatically trains with the back-propagation algorithm, updating the parameters in each iteration to reduce the loss. An encapsulated optimizer is provided; just feed it data every round, and TensorFlow automatically adds the operations that implement back propagation and gradient descent in the background. train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) calls tf.train.GradientDescentOptimizer with a learning rate of 0.5 and cross-entropy as the optimization target, yielding the training operation train_step.
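
From the listing:

    # SGD with learning rate 0.5, minimizing the cross-entropy.
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)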


tf.global_variables_initializer().run() runs TensorFlow's global parameter initializer, tf.global_variables_initializer.


batch_xs, batch_ys = mnist.train.next_batch(100), then run the training operation train_step: each step randomly draws 100 samples from the training set to form a mini-batch, feeds them to the placeholders, and calls train_step to train on those samples. Training on a small subset of samples each time is stochastic gradient descent, which converges faster; training on all samples every time would cost a large amount of computation and make it harder to escape local optima.
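
The initialization and training loop from the listing:

    tf.global_variables_initializer().run()
    for i in range(1000):
        # Mini-batch SGD: 100 random training samples per step.
        batch_xs, batch_ys = mnist.train.next_batch(100)
        train_step.run({x: batch_xs, y_: batch_ys})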


correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) validates the model's accuracy. tf.argmax finds the index of the maximum value in a tensor: tf.argmax(y, 1) is the digit class with the highest predicted probability, and tf.argmax(y_, 1) is the sample's true digit class. tf.equal checks whether the predicted class is correct and returns, per sample, whether the classification was correct.
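
From the listing:

    # argmax along axis 1: predicted digit vs. true digit, per sample.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))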


accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) computes the prediction accuracy over all samples. tf.cast converts correct_prediction's output from boolean to float so the values can be averaged.
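
As a made-up illustration: if correct_prediction came out as [True, False, True, True], tf.cast would turn it into [1.0, 0.0, 1.0, 1.0] and the mean would be 0.75, i.e. 75% accuracy. The listing's line:

    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))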


print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels})) feeds the test data's features and labels into the evaluation procedure and computes the model's accuracy on the test set. Softmax Regression classification of the MNIST data achieves an average test-set accuracy of about 92%.
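
The final line of the listing:

    print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))  # ~0.92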


Steps for implementing a simple machine-learning algorithm in TensorFlow:
1. Define the algorithm formula, the forward computation of the neural network.
2. Define the loss, select the optimizer, and have the optimizer minimize the loss.
3. Iteratively train on the data.
4. Evaluate accuracy on the test set or validation set.


Defining the formulas only builds the computation graph; computation executes only when the run method is called and data is fed in.
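
A minimal sketch of this deferred execution (the constants here are illustrative, not part of the MNIST program):

    import tensorflow as tf
    sess = tf.InteractiveSession()
    total = tf.constant(3.0) + tf.constant(4.0)  # only adds nodes to the graph
    print(total)         # a symbolic Tensor; nothing has been computed yet
    print(total.eval())  # running the graph actually produces 7.0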


 
 
    from tensorflow.examples.tutorials.mnist import input_data
    # Load MNIST with one-hot labels: 55000 train / 10000 test / 5000 validation.
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    print(mnist.train.images.shape, mnist.train.labels.shape)
    print(mnist.test.images.shape, mnist.test.labels.shape)
    print(mnist.validation.images.shape, mnist.validation.labels.shape)

    import tensorflow as tf
    sess = tf.InteractiveSession()

    # Model: y = softmax(Wx + b).
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Loss: cross-entropy against the one-hot true labels y_.
    y_ = tf.placeholder(tf.float32, [None, 10])
    cross_entropy = tf.reduce_mean(
        -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

    # Optimizer: SGD, learning rate 0.5.
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    # Train: 1000 steps of mini-batch SGD, 100 random samples per step.
    tf.global_variables_initializer().run()
    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        train_step.run({x: batch_xs, y_: batch_ys})

    # Evaluate: fraction of test samples whose predicted digit matches the label.
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))


References:
TensorFlow in Practice


Paid consultation is welcome (150 RMB per hour). Contact: qingxingfengzi
