TensorFlow implements backpropagation.
One advantage of TensorFlow is that it can maintain the state of operations and automatically update model variables via backpropagation.
TensorFlow updates variables by running the computational graph, backpropagating the error to minimize the loss function. This step is implemented by declaring an optimization function. Once the optimizer is declared, TensorFlow uses it to compute the backpropagation terms for all variables in the computational graph. When we feed in data and minimize the loss function, TensorFlow adjusts the variables according to the state of the computational graph.
In the regression algorithm example, we sample random numbers from a normal distribution with mean 1 and standard deviation 0.1, and then multiply them by the variable A. The loss function is the L2 loss. Theoretically, the optimal value of A is 10, because the mean of the generated sample data is 1.
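As a quick sanity check (plain NumPy, not part of the TensorFlow script), the least-squares optimum for A can be computed in closed form. Because the samples have a small spread around 1, the exact optimum is 10 * E[x] / E[x^2], which is slightly below 10 but very close to it:

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(1.0, 0.1, 100000)   # same distribution as the example
y = np.full_like(x, 10.0)                # constant target of 10

# Closed-form least-squares solution for A in min_A sum((A*x - y)^2):
# A* = sum(x*y) / sum(x*x)
A_star = np.dot(x, y) / np.dot(x, x)
print(round(A_star, 1))  # close to 10 (approximately 9.9)
```

The gradient-descent loop in the script below converges toward this same value.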
The second example is a simple binary classification algorithm. We generate 100 numbers from two normal distributions, N(-1, 1) and N(3, 1). All data generated from N(-1, 1) is labeled as target class 0; all data generated from N(3, 1) is labeled as target class 1. The model uses the sigmoid function to map the generated data to the target classes. In other words, the model is sigmoid(x + A), where A is the variable to be fitted; theoretically A = -1. If the means of the two normal distributions are m1 and m2, then the value of A that shifts them to be equidistant from 0 is -(m1 + m2)/2. We will see how TensorFlow arrives at this value.
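The claim that A = -(m1 + m2)/2 places the decision boundary midway between the two class means can be verified with a few lines of NumPy (a standalone check, separate from the TensorFlow script):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

m1, m2 = -1.0, 3.0
A = -(m1 + m2) / 2.0   # = -1.0, the theoretical optimum

# The boundary sigmoid(x + A) = 0.5 sits at x = -A = 1,
# exactly midway between the two class means.
print(sigmoid(m1 + A))  # sigmoid(-2) ~ 0.12, below 0.5 -> class 0
print(sigmoid(m2 + A))  # sigmoid(2)  ~ 0.88, above 0.5 -> class 1
print(sigmoid(0.0))     # exactly 0.5 at the midpoint
```

Both class means land symmetrically on either side of the 0.5 threshold, which is why this value of A separates the two distributions as well as a single shift parameter can.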
At the same time, specifying an appropriate learning rate helps the convergence of machine learning algorithms. The optimizer type also needs to be specified; the two examples above use standard gradient descent, implemented in TensorFlow by the GradientDescentOptimizer() function.
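To see why the learning rate matters, here is a tiny pure-Python sketch (a toy illustration, not part of the TensorFlow script) that runs gradient descent by hand on the loss (A - 10)^2 from the regression example: a small rate converges smoothly, while too large a rate makes the updates overshoot and diverge.

```python
def descend(lr, steps=100):
    """Gradient descent on loss(A) = (A - 10)^2, starting from A = 0."""
    A = 0.0
    for _ in range(steps):
        grad = 2.0 * (A - 10.0)  # d/dA of (A - 10)^2
        A -= lr * grad
    return A

print(descend(0.02))  # small rate: converges near the optimum 10
print(descend(1.1))   # too-large rate: each step amplifies the error
```

With rate 0.02 the error shrinks by a factor of 0.96 per step; with rate 1.1 it grows by a factor of 1.2 per step, so after 100 steps A is nowhere near 10.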
# Backpropagation
# ----------------------------------
# The following Python script demonstrates backpropagation
# for a regression model and a classification model.
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create a graph session
sess = tf.Session()

# Regression algorithm example:
# We will create sample data as follows:
# x-data: 100 random samples from a normal ~ N(1, 0.1)
# target: 100 values of 10.
# We will fit the model:
# x-data * A = target
# Theoretically, A = 10.

# Generate data and create placeholders and variable A
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)

# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(shape=[1]))

# Add the multiplication operation
my_output = tf.multiply(x_data, A)

# Add the L2 loss function
loss = tf.square(my_output - y_target)

# Initialize the variables before running the optimizer
init = tf.global_variables_initializer()
sess.run(init)

# Declare the optimizer
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# Run the training loop
for i in range(100):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 25 == 0:
        print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))

# Classification algorithm example
# We will create sample data as follows:
# x-data: 50 random values from a normal N(-1, 1)
#         + 50 random values from a normal N(3, 1)
# target: 50 values of 0 + 50 values of 1.
# These are essentially 100 values of the corresponding output index.
# We will fit the binary classification model:
# If sigmoid(x + A) < 0.5 -> 0 else 1
# Theoretically, A should be -(mean1 + mean2)/2

# Reset the computational graph
ops.reset_default_graph()

# Create the graph session
sess = tf.Session()

# Generate data
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)

# Create the bias variable (one model parameter = A)
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))

# Add the translation operation
# We want to create the operation sigmoid(x + A);
# note that the sigmoid() part is inside the loss function
my_output = tf.add(x_data, A)

# The specified loss function expects batch data,
# so add a batch dimension with expand_dims()
my_output_expanded = tf.expand_dims(my_output, 0)
y_target_expanded = tf.expand_dims(y_target, 0)

# Initialize variable A
init = tf.global_variables_initializer()
sess.run(init)

# Declare the loss function: sigmoid cross entropy
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output_expanded, labels=y_target_expanded)

# Add an optimizer so TensorFlow knows how to update the bias variable
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy)

# Run the training loop
for i in range(1400):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i + 1) % 200 == 0:
        print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))

# Evaluate predictions
predictions = []
for i in range(len(x_vals)):
    x_val = [x_vals[i]]
    prediction = sess.run(tf.round(tf.sigmoid(my_output)), feed_dict={x_data: x_val})
    predictions.append(prediction[0])

accuracy = sum(x == y for x, y in zip(predictions, y_vals)) / 100.
print('Final accuracy = ' + str(np.round(accuracy, 2)))
Output:
Step #25 A = [6.12853956] Loss = [16.45088196]
Step #50 A = [8.55680943] Loss = [2.18415046]
Step #75 A = [9.50547695] Loss = [5.29813051]
Step #100 A = [9.89214897] Loss = [0.34628963]
Step #200 A = [3.84576249] Loss = [[0.00083012]]
Step #400 A = [0.42345378] Loss = [[0.01165466]]
Step #600 A = [-0.35141727] Loss = [[0.05375391]]
Step #800 A = [-0.74206048] Loss = [[0.05468176]]
Step #1000 A = [-0.89036471] Loss = [[0.19636908]]
Step #1200 A = [-0.90850282] Loss = [[0.00608062]]
Step #1400 A = [-1.09374011] Loss = [[0.11037558]]
Final accuracy = 1.0
That is all the content of this article. I hope it is helpful for your learning, and that you will continue to support this site.