Learning notes TF055: TensorFlow neural network fits a simple one-dimensional quadratic function.



The typical TensorFlow workflow: load data, define hyperparameters, build the network, train the model, evaluate the model, and predict.

Construct raw data that satisfies the quadratic function y = ax^2 + b, and build the simplest neural network with an input layer, a hidden layer, and an output layer. TensorFlow learns the hidden-layer and output-layer weights and biases. Observe how the loss value changes as the number of training steps increases.

Generate and load data for the equation y = x^2 - 0.5. Construct x and y that satisfy the equation, then add noise so the points do not lie exactly on the curve.

import tensorflow as tf
import numpy as np

# Construct data satisfying the one-dimensional quadratic equation y = x^2 - 0.5
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]  # 300 evenly spaced points in [-1, 1]; np.newaxis turns the 1-D array into a 300 x 1 two-dimensional array
noise = np.random.normal(0, 0.05, x_data.shape)  # noise with the same shape as x_data, drawn from a normal distribution with mean 0 and standard deviation 0.05
y_data = np.square(x_data) - 0.5 + noise         # y = x^2 - 0.5 + noise

Define placeholders xs and ys as the neural network's input variables.

xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

Build a network model.

Build a hidden layer and an output layer. The layer-building function takes four parameters: the input data, the input size, the output size, and the activation function. Each layer performs the vectorized computation (y = weights * x + biases), applies the activation function for nonlinear processing, and outputs the result. Define the hidden layer and output layer:

def add_layer(inputs, in_size, out_size, activation_function=None):
    # Build a weight matrix of shape in_size x out_size
    weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # Build a bias matrix of shape 1 x out_size
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    # Matrix multiplication plus bias
    Wx_plus_b = tf.matmul(inputs, weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs  # Return the output data

# Build the hidden layer. Here the hidden layer has 20 neurons.
h1 = add_layer(xs, 1, 20, activation_function=tf.nn.relu)
# Build the output layer. Like the input layer, it has one neuron.
prediction = add_layer(h1, 20, 1, activation_function=None)

Construct a loss function that measures the error between the predicted and actual values of the output layer: square the difference, sum it, then take the mean. Gradient descent minimizes the loss with a learning rate of 0.1.

loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

Train the model for 1000 iterations and print the training loss every 50 iterations.

init = tf.global_variables_initializer()  # Initialize all variables
sess = tf.Session()
sess.run(init)

for i in range(1000):  # train 1000 times
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:  # print the loss value every 50 iterations
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

Training learns the weight values. The model fits the coefficients 1 and -0.5 of y = x^2 - 0.5. The loss value keeps decreasing, and the trained parameters get closer and closer to the target. To evaluate the model, run forward propagation with the learned weights and biases, compare the predictions with the true values from y = x^2 - 0.5, and compute an accuracy score from their similarity.
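
As a rough illustration of this evaluation step, the sketch below reuses sess, prediction, and xs from the code above; x_test, y_true, and mae are names introduced here purely for illustration. It runs forward propagation on fresh x values and measures how close the predictions are to the true curve y = x^2 - 0.5.

# Evaluation sketch: forward propagation on fresh points vs. the true curve
x_test = np.linspace(-1, 1, 100)[:, np.newaxis]       # test points in [-1, 1]
y_true = np.square(x_test) - 0.5                       # ground truth y = x^2 - 0.5
y_pred = sess.run(prediction, feed_dict={xs: x_test})  # learned model's predictions
mae = np.mean(np.abs(y_pred - y_true))                 # mean absolute error as a similarity measure
print("Mean absolute error on the test points:", mae)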

Hyperparameter settings. Hyperparameters are framework parameters of a machine learning model; they are set manually and tuned by continual trial and error.

Learning rate. The larger the learning rate, the shorter the training time and the faster the convergence; the smaller it is set, the higher the training accuracy. A variable learning rate can be used: record the best accuracy reached during training, and if n consecutive rounds (epochs) fail to reach a new best, conclude that accuracy has stopped improving and stop training (early stopping, the no-improvement-in-n rule), or halve the learning rate and continue. As training approaches the optimal solution, a smaller learning rate yields higher accuracy.
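
The sketch below is one schematic reading of this rule, not code from the source: train_one_epoch is a hypothetical helper that trains for one epoch and returns the monitored metric (the loss stands in for the accuracy discussed above), and n, best_loss, and the stopping threshold are illustrative names and values.

# Schematic no-improvement-in-n rule with learning-rate halving and early stopping
n = 10                                   # rounds allowed without improvement
learning_rate = 0.1
best_loss = float("inf")
rounds_without_improvement = 0
for epoch in range(1000):
    current_loss = train_one_epoch(learning_rate)   # hypothetical helper: one epoch of training
    if current_loss < best_loss:
        best_loss = current_loss
        rounds_without_improvement = 0
    else:
        rounds_without_improvement += 1
    if rounds_without_improvement >= n:
        learning_rate /= 2               # halve the learning rate and keep going
        rounds_without_improvement = 0
        if learning_rate < 1e-4:         # or stop once the rate is negligible
            break                        # early stopping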

Mini-batch size. The batch size determines how weights are updated: the gradients for the whole batch of samples are computed, averaged, and only then are the weights updated. The larger the batch, the faster training runs (matrix and linear-algebra libraries accelerate the computation) but the lower the weight-update frequency; the smaller the batch, the slower training runs. Choose the size according to the machine's hardware and the dataset size.
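
A minimal sketch of mini-batch training for this example, assuming the sess, train_step, xs, ys, x_data, and y_data defined earlier; batch_size and the epoch count are illustrative values.

# Mini-batch training: update weights once per batch instead of once per full pass
batch_size = 30
num_samples = x_data.shape[0]
for epoch in range(100):
    indices = np.random.permutation(num_samples)    # reshuffle the samples each epoch
    for start in range(0, num_samples, batch_size):
        batch_idx = indices[start:start + batch_size]
        sess.run(train_step, feed_dict={xs: x_data[batch_idx],
                                        ys: y_data[batch_idx]})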

Regularization parameter (λ). Set by experience. Complex networks overfit noticeably (high accuracy on training data, reduced accuracy on test data). Start with λ = 0, determine a good learning rate first, then give λ a value and adjust it according to the accuracy.
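
A hedged sketch of how λ could enter this example as an L2 penalty on the trainable variables; lambda_, l2_penalty, and regularized_loss are illustrative names, and the source does not include this code.

# Add an L2 penalty weighted by lambda_ to the loss defined earlier
lambda_ = 0.01
l2_penalty = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
regularized_loss = loss + lambda_ * l2_penalty
# regularized_loss would then be minimized in place of loss in the training step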

References:
Analysis and Practice of TensorFlow Technology

Recommendations for Shanghai machine learning job opportunities are welcome; contact me at qingxingfengzi.
