Today I'd like to share how to use gradient descent to solve a linear regression problem. The framework is TensorFlow and the development environment is Linux (Ubuntu).
The Python libraries we need are NumPy and Matplotlib; if you are not familiar with these two libraries, a quick search on Google or Baidu will get you up to speed.
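Before starting, make sure the three libraries can be imported. The lines below are simply the import block used later in the full listing; the TensorFlow 1.x API is assumed throughout, since tf.train.GradientDescentOptimizer and tf.Session are used.
# -*- coding: utf-8 -*-
"""Solve a linear regression problem quickly with gradient descent."""
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf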
First we use NumPy's normal distribution function to randomly generate 100 points. These (x, y) points correspond to the linear equation y = 0.1 * x + 0.2, i.e. weight = 0.1 and bias = 0.2. The Python code that generates the 100 "real" data points is:
# Build the data
points_num = 100
vectors = []
# Generate 100 points with NumPy's normal random distribution function.
# These points' (x, y) coordinates correspond to the linear equation y = 0.1 * x + 0.2,
# i.e. weight (W) is 0.1 and bias (b) is 0.2.
for i in xrange(points_num):
    x1 = np.random.normal(0.0, 0.66)
    y1 = 0.1 * x1 + 0.2 + np.random.normal(0.0, 0.04)
    vectors.append([x1, y1])
x_data = [v[0] for v in vectors]  # x coordinates of the real points
y_data = [v[1] for v in vectors]  # y coordinates of the real points
After generating the 100 random points, we use the Matplotlib library to plot them and take a look at the data:
# Figure 1: show the 100 random data points
plt.plot(x_data, y_data, 'r*', label="Original data")  # red star-shaped dots
plt.title("Linear Regression using Gradient Descent")
plt.legend()  # show the "Original data" label
plt.show()
Next, we need to use the TensorFlow framework to build our linear regression model.
# Build a linear regression model
W = tf. Variable (Tf.random_uniform ([1],-1.0, 1.0)) # initialize Weight
b = tf. Variable (Tf.zeros ([1]) # initialize Bias
y = W * x_data + b # model calculated Y
Anyone who has studied a bit of deep learning knows that one concept is essential: the loss function. Essentially every model is trained by minimizing its loss, so we need to define a loss function and then optimize it to fit the best line.
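Concretely, the loss used below is the mean squared error over the N = 100 generated points:

    loss(W, b) = (1/N) * sum_i (W * x_i + b - y_i)^2

which is exactly what tf.reduce_mean(tf.square(y - y_data)) computes.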
# Define the loss function (also called the cost function)
# It computes ((y - y_data) ^ 2) summed over all elements of the tensor, divided by N
loss = tf.reduce_mean(tf.square(y - y_data))
# Optimize our loss function with a gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(0.5)  # set the learning rate to 0.5
train = optimizer.minimize(loss)
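To make it clearer what optimizer.minimize(loss) is doing under the hood, here is a minimal NumPy sketch (not part of the original demo) of a single gradient descent step for this particular loss. The names w, b_ and lr are my own illustrative variables; only x_data and y_data come from the code above.
w, b_, lr = 0.0, 0.0, 0.5             # start from an arbitrary point, same learning rate as above
x = np.array(x_data)
t = np.array(y_data)
pred = w * x + b_
grad_w = np.mean(2 * (pred - t) * x)  # d(loss)/dw for the mean squared error
grad_b = np.mean(2 * (pred - t))      # d(loss)/db
w -= lr * grad_w                      # the update GradientDescentOptimizer applies each step
b_ -= lr * grad_b
TensorFlow computes these gradients automatically from the graph, so the training loop below only needs to call sess.run(train) repeatedly.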
A deep learning model needs to be trained on data repeatedly to improve. Since this model is very simple, about 20 training steps are enough.
# Create the session
sess = tf.Session()
# Initialize all variables in the flow graph
init = tf.global_variables_initializer()
sess.run(init)
# Train for 20 steps
for step in xrange(20):
    # Run one optimization step
    sess.run(train)
    # Print the loss, weight and bias at each step
    print("Step=%d, Loss=%f, [Weight=%f Bias=%f]"
          % (step, sess.run(loss), sess.run(W), sess.run(b)))
After training is complete, we can use Matplotlib to plot the fitted line on top of the data and see how well the model did. The code is as follows:
# Figure 2: plot all the points and draw the best-fit line
plt.plot(x_data, y_data, 'r*', label="Original data")  # red star-shaped dots
plt.title("Linear Regression using Gradient Descent")
plt.plot(x_data, sess.run(W) * x_data + sess.run(b), label="Fitted line")  # the fitted line
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Close the session
sess.close()
Note: the complete code is below. Overall, this demo is fairly simple; if you are interested, feel free to run it yourself.
# -*- coding: utf-8 -*-
"""Solve a linear regression problem quickly with gradient descent."""

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# Build the data
points_num = 100
vectors = []
# Generate 100 points with NumPy's normal random distribution function.
# These points' (x, y) coordinates correspond to the linear equation y = 0.1 * x + 0.2,
# i.e. weight (W) is 0.1 and bias (b) is 0.2.
for i in xrange(points_num):
    x1 = np.random.normal(0.0, 0.66)
    y1 = 0.1 * x1 + 0.2 + np.random.normal(0.0, 0.04)
    vectors.append([x1, y1])
x_data = [v[0] for v in vectors]  # x coordinates of the real points
y_data = [v[1] for v in vectors]  # y coordinates of the real points

# Figure 1: show the 100 random data points
plt.plot(x_data, y_data, 'r*', label="Original data")  # red star-shaped dots
plt.title("Linear Regression using Gradient Descent")
plt.legend()
plt.show()

# Build the linear regression model
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))  # initialize the weight
b = tf.Variable(tf.zeros([1]))                       # initialize the bias
y = W * x_data + b                                   # y predicted by the model

# Define the loss function (also called the cost function)
# It computes ((y - y_data) ^ 2) summed over all elements of the tensor, divided by N
loss = tf.reduce_mean(tf.square(y - y_data))
# Optimize our loss function with a gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(0.5)  # set the learning rate to 0.5
train = optimizer.minimize(loss)

# Create the session
sess = tf.Session()
# Initialize all variables in the flow graph
init = tf.global_variables_initializer()
sess.run(init)
# Train for 20 steps
for step in xrange(20):
    # Run one optimization step
    sess.run(train)
    # Print the loss, weight and bias at each step
    print("Step=%d, Loss=%f, [Weight=%f Bias=%f]"
          % (step, sess.run(loss), sess.run(W), sess.run(b)))

# Figure 2: plot all the points and draw the best-fit line
plt.plot(x_data, y_data, 'r*', label="Original data")  # red star-shaped dots
plt.title("Linear Regression using Gradient Descent")
plt.plot(x_data, sess.run(W) * x_data + sess.run(b), label="Fitted line")  # the fitted line
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()

# Close the session
sess.close()
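One practical note: the loop uses xrange, so the listing as written targets Python 2 (along with the TensorFlow 1.x API). If you want to try it under Python 3, a minimal shim like the one below, placed near the top of the file, is one assumed workaround; the TensorFlow calls themselves still need a 1.x-compatible install.
try:
    xrange           # Python 2: the built-in already exists
except NameError:
    xrange = range   # Python 3: fall back to range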