TensorFlow Primer (Basic Syntax, Small Programs)


This article draws on the Python TensorFlow tutorial series. TensorFlow's getting-started ideas are: use a graph to represent the computational task; execute the graph in the context of what is called a session; use tensors to represent data; maintain state through variables; and use feeds and fetches to assign values to, or retrieve data from, arbitrary operations. First, the basic syntax:
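Before Example 1, here is a minimal warm-up sketch of my own (not from the original article; the names are illustrative) that touches the graph, session, tensor and feed/fetch ideas in a few lines. Variables appear in Example 2.

import tensorflow as tf   # the TensorFlow 1.x API used throughout this article

x = tf.placeholder(tf.float32)    # a tensor whose value is fed in at run time
y = tf.multiply(x, 2.0)           # builds the graph; nothing is computed yet
with tf.Session() as sess:        # the graph executes inside a session
    print(sess.run(y, feed_dict={x: 3.0}))  # feed x, fetch y; prints 6.0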

Syntax Example 1:

# Create 2 matrices, the former 1 row 2 columns, the latter 2 rows 1 column,
# then matrix-multiply them:
import tensorflow as tf

matrix1 = tf.constant([[3, 3]])
matrix2 = tf.constant([[2], [2]])
product = tf.matmul(matrix1, matrix2)

# The operations above only define the graph; a session is used to compute it:
with tf.Session() as sess:
    result = sess.run(product)
    print(result)  # [[12]]
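A side note beyond the original text (but standard TensorFlow 1.x behavior): sess.run also accepts a list of fetches, so several graph nodes can be evaluated in a single call:

with tf.Session() as sess:
    result, m1 = sess.run([product, matrix1])  # fetch two nodes at once
    print(result, m1)  # [[12]] [[3 3]]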

Syntax Example 2:

# Define a TensorFlow variable:
state = tf.Variable(0, name='counter')
# Define a constant:
one = tf.constant(1)
# Define the add step (note: this step does not compute anything yet):
new_value = tf.add(state, one)
# Update state to new_value:
update = tf.assign(state, new_value)
# Variables need to be initialized and activated, and their values are only
# printed through sess.run():
init = tf.global_variables_initializer()
# Use a session to compute:
with tf.Session() as sess:
    sess.run(init)
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))  # prints 1, then 2, then 3
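Equivalently (a variant of mine, not in the original article), tf.assign_add folds the add and the assignment into a single op:

update = tf.assign_add(state, one)  # same effect as tf.assign(state, tf.add(state, one))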

Syntax Example 3:

# To pass a value in at run time, use a TensorFlow placeholder as a temporary
# slot, and feed data in this form: sess.run(***, feed_dict={input: ***}).
# In TensorFlow a placeholder's type must be declared, generally tf.float32:
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
# output is the element-wise product of input1 and input2:
output = tf.multiply(input1, input2)
with tf.Session() as sess:
    print(sess.run(output, feed_dict={input1: [7.], input2: [2.]}))
# Output: [14.]
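A placeholder can also be given an explicit shape, so TensorFlow rejects mis-shaped feeds early. This shaped variant is an illustration of mine; Small Program Example 2 below relies on the same pattern:

xs = tf.placeholder(tf.float32, [None, 2])  # any number of rows, exactly 2 columns
with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(xs), feed_dict={xs: [[1., 2.], [3., 4.]]}))  # 10.0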
Second, the small programs:

Small Program Example 1:

import tensorflow as tf
import numpy as np

# Example 1: fit the y_data function; the weight and bias should converge to about 0.1 and 0.3.

# np.random.rand(100) generates 100 random numbers in [0, 1) as a 1-D array.
# np.random.rand(2, 3) would generate a two-dimensional array of 2 rows and 3 columns.
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# The weight and bias are constantly updated, so they are stored in tf.Variable.
# tf.random_uniform() takes (shape, min, max).
# The bias is initialized to 0.
weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))

y = weights * x_data + biases

# Loss function: tf.reduce_mean() takes the mean; tf.square() squares element-wise.
loss = tf.reduce_mean(tf.square(y - y_data))

# Minimize the loss function with gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# TF variables need to be initialized; furthermore, sess.run(init) is required
# to actually perform the initialization.
init = tf.global_variables_initializer()

# Use a session to compute:
with tf.Session() as sess:
    sess.run(init)
    for step in range(201):
        sess.run(train)
        if step % 20 == 0:  # report every 20 steps
            print(step, sess.run(weights), sess.run(biases))
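As a sanity check, a variant of mine on the same training loop (using only nodes already defined above) also fetches the loss, which should shrink toward 0 as the weight and bias approach 0.1 and 0.3:

with tf.Session() as sess:
    sess.run(init)
    for step in range(201):
        sess.run(train)
        if step % 20 == 0:
            print(step, sess.run(weights), sess.run(biases), sess.run(loss))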

Small Program Example 2:

# Example 2: build a neural network.
# add_layer adds one layer; it has four parameters: the input values, the input
# size, the output size, and the activation function.
# Wx_plus_b is the pre-activation value; the function returns the activated value.
def add_layer(inputs, in_size, out_size, activation_function=None):
    # tf.random_normal() takes a shape; a mean and standard deviation can also be given.
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.matmul(inputs, Weights) + biases
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs

# Build the training data.
# np.linspace() divides the interval between -1 and 1 evenly into 300 numbers.
# noise is normally distributed; the first two parameters are the normal
# distribution's parameters (mean and standard deviation), then the size.
x_data = np.linspace(-1, 1, 300, dtype=np.float32)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise

# Use placeholders to define the inputs the neural network needs.
# In the shape argument, None is the number of rows and 1 the number of columns;
# the rows are the samples, and the columns are the features of each sample.
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# Input layer: 1 neuron (because there is only one feature); hidden layer: 10; output layer: 1.
# Call the function to define the hidden and output layers; the input size is the
# number of neurons in the previous layer (fully connected), and the output size
# is the number in this layer.
L1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(L1, 10, 1, activation_function=None)

# Compute the error between the predicted value prediction and the real value,
# then take the mean of the squared differences as the loss function.
# reduction_indices gives the dimensions the final data is compressed along; if
# this parameter is omitted, everything is reduced (that is, to 0-D, a scalar).
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Initialize the variables, activate them, and execute the operations.
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        # Training:
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:  # report the loss every 50 steps
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
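To make the loss expression concrete, here is a small sketch of mine with made-up 2-sample arrays: reduce_sum with reduction_indices=[1] collapses each row to a per-sample squared error, and reduce_mean then averages over the samples:

import tensorflow as tf

a = tf.constant([[1.], [3.]])   # stand-in for ys,         shape [2, 1]
b = tf.constant([[0.], [1.]])   # stand-in for prediction, shape [2, 1]
per_sample = tf.reduce_sum(tf.square(a - b), reduction_indices=[1])  # shape [2]
mse = tf.reduce_mean(per_sample)                                     # scalar
with tf.Session() as sess:
    print(sess.run(per_sample))  # [1. 4.]
    print(sess.run(mse))         # 2.5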
