Basic usage of TensorFlow -- create a neural network and train it


Article Author: Tyan
Blog: noahsnail.com |  CSDN | Pinterest

This article mainly covers using TensorFlow to create a simple neural network and train it.

#!/usr/bin/env python
# _*_ coding:utf-8 _*_

import tensorflow as tf
import numpy as np

# Create a neural network layer
def add_layer(inputs, in_size, out_size, activation_function=None):
    """
    :param inputs: input data of the layer
    :param in_size: size of the input data
    :param out_size: size of the output data
    :param activation_function: activation function of the layer, default None
    """
    # Define the initial weights of the layer
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    # Define the biases of the layer
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    # Compute W*x + b
    w_mul_x_plus_b = tf.matmul(inputs, Weights) + biases
    # Apply the activation function if one was given
    if activation_function is None:
        output = w_mul_x_plus_b
    else:
        output = activation_function(w_mul_x_plus_b)
    return output

# Create a three-layer network (input layer, hidden layer, output layer)
# with 1, 10 and 1 neurons respectively

# Create the input data: one feature, 300 samples
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
# Create noise for the data
noise = np.random.normal(0, 0.05, x_data.shape)
# Create the output for the input data
y_data = np.square(x_data) + 1 + noise

# Define the input placeholder; None is the number of samples, 1 the number of features
xs = tf.placeholder(tf.float32, [None, 1])
# Define the output placeholder, analogous to xs
ys = tf.placeholder(tf.float32, [None, 1])

# Define the hidden layer
hidden_layer = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
# Define the output layer
prediction = add_layer(hidden_layer, 10, 1, activation_function=None)

# Solve for the network parameters
# Define the loss function
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
# Define the training step
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()
# Define the session
sess = tf.Session()
# Run the initialization
sess.run(init)

# Training
for i in range(1000):
    # Run one training step, feeding in the data
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 100 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))

# Close the session
sess.close()
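To make the shapes inside add_layer concrete, here is a minimal NumPy mirror of the same forward computation. The name add_layer_np and the tiny 5-sample input are illustrative only, not part of the original script:

```python
import numpy as np

def add_layer_np(inputs, weights, biases, activation=None):
    # Same computation as add_layer above: W*x + b, then an optional activation
    z = inputs @ weights + biases
    return z if activation is None else activation(z)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 5)[:, np.newaxis]   # 5 samples, 1 feature
W1 = rng.normal(size=(1, 10))              # hidden layer: 1 -> 10
b1 = np.zeros((1, 10)) + 0.1
W2 = rng.normal(size=(10, 1))              # output layer: 10 -> 1
b2 = np.zeros((1, 1)) + 0.1

hidden = add_layer_np(x, W1, b1, activation=lambda z: np.maximum(z, 0))  # ReLU
pred = add_layer_np(hidden, W2, b2)        # no activation on the output layer
print(hidden.shape, pred.shape)            # (5, 10) (5, 1)
```

The shapes follow the matrix product: a (5, 1) input times a (1, 10) weight matrix gives a (5, 10) hidden layer, and the ReLU keeps every hidden value non-negative.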

Running the script prints the loss every 100 iterations, for example:

1.06731
0.0111914
0.00651229
0.00530187
0.00472237
0.00429948
0.00399815
0.00377548
0.00359714
0.00345819
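The printed numbers show the loss shrinking as GradientDescentOptimizer repeatedly nudges the parameters downhill. As a rough illustration of that mechanism, here is a hand-rolled gradient-descent loop in plain NumPy. Note this fits a simplified linear model to the same data, not the author's two-layer network, so the final loss it reaches is higher:

```python
import numpy as np

# Same data-generation recipe as the script above
rng = np.random.default_rng(42)
x = np.linspace(-1, 1, 300)[:, np.newaxis]
y = np.square(x) + 1 + rng.normal(0, 0.05, x.shape)

# A single linear layer W*x + b (illustrative simplification)
W = rng.normal(size=(1, 1))
b = np.zeros((1, 1))
lr = 0.1

losses = []
for _ in range(100):
    pred = x @ W + b
    err = pred - y
    # Same loss as the script: mean over samples of the per-sample squared error
    losses.append(float(np.mean(np.sum(err ** 2, axis=1))))
    # Gradients of the mean squared error with respect to W and b
    W -= lr * 2 * (x.T @ err) / len(x)
    b -= lr * 2 * err.mean(axis=0, keepdims=True)

print(losses[0], losses[-1])  # the final loss is much smaller than the first
```

Each step moves W and b against the gradient of the loss, which is exactly what train_step does for the full network at every sess.run call.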
References: https://www.youtube.com/user/MorvanZhou
