TensorFlow Introduction (1): Basic Usage


Reprinting is welcome, but please be sure to credit the source and author.

Reference: http://wiki.jikexueyuan.com/project/tensorflow-zh/get_started/basic_usage.html
@author: Huangyongye
@date: 2017-02-25

This example follows TensorFlow's Chinese documentation to learn the basic usage of TensorFlow. A few key points from the documentation:

1. The use of Session() versus InteractiveSession(). The latter replaces Session.run() with Tensor.eval() and Operation.run(); Tensor.eval() is the more common of the two, and every expression can be treated as a tensor.
2. All variables and constants appearing in a TF expression should be TF types.
3. Whenever a variable is declared, it must be initialized, either with sess.run(tf.global_variables_initializer()) or with x.initializer.run() (more on this in section 2 below).

Example 1: Plane fitting

In this example you can see the general procedure of a machine-learning program: 1. prepare the data; 2. construct the model (set up the objective function to solve); 3. solve the model.

import tensorflow as tf
import numpy as np

# 1. Prepare data: use NumPy to generate 100 phony data points.
x_data = np.float32(np.random.rand(2, 100))  # random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# 2. Construct a linear model
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# 3. Solve the model
# Set the loss function: mean squared error
loss = tf.reduce_mean(tf.square(y - y_data))
# Choose gradient descent as the optimization method
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Optimization target: minimize the loss function
train = optimizer.minimize(loss)

############################################################
# Below, TF solves the task set up above
# 1. Initialize variables: a required step in TF;
#    every declared variable must be initialized before use
init = tf.global_variables_initializer()

# Let TensorFlow allocate GPU memory on demand
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# 2. Launch the graph
sess = tf.Session(config=config)
sess.run(init)

# 3. Iterate the train op (minimizing the loss) to fit the plane
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# The best fit is W: [[0.100  0.200]], b: [0.300]
0 [[ 0.27467242  0.81889796]] [-0.13746099]
20 [[ 0.1619305   0.39317462]] [ 0.18206716]
40 [[ 0.11901411  0.25831661]] [ 0.2642329 ]
60 [[ 0.10580806  0.21761954]] [ 0.28916073]
80 [[ 0.10176832  0.20532639]] [ 0.29671678]
100 [[ 0.10053726  0.20161074]] [ 0.29900584]
120 [[ 0.100163    0.20048723]] [ 0.29969904]
140 [[ 0.10004941  0.20014738]] [ 0.29990891]
160 [[ 0.10001497  0.20004457]] [ 0.29997244]
180 [[ 0.10000452  0.20001349]] [ 0.29999167]
200 [[ 0.10000138  0.2000041 ]] [ 0.29999748]
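Incidentally, if you also want to watch the loss itself shrink, the train op and the current loss can be fetched in a single run call. A small sketch, reusing the names defined in the listing above (the printed values will vary from run to run):

# a variant of the training loop above: fetch the train op and the loss together
for step in xrange(0, 201):
    _, loss_val = sess.run([train, loss])
    if step % 20 == 0:
        print step, loss_val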
Example 2: Sum of two numbers
input1 = tf.constant(2.0)
input2 = tf.constant(3.0)
input3 = tf.constant(5.0)

intermd = tf.add(input1, input2)
mul = tf.multiply(input2, input3)

with tf.Session() as sess:
    result = sess.run([mul, intermd])  # execute multiple ops in a single call
    print result
    print type(result)
    print type(result[0])
[15.0, 5.0]
<type 'list'>
<type 'numpy.float32'>
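Note that the fetch list is not limited to two elements: any number of graph elements can be fetched in one call, and the returned list preserves the order of the fetches. A minimal sketch reusing the constants above:

with tf.Session() as sess:
    m, i, k = sess.run([mul, intermd, input3])  # unpack in fetch order
    print m, i, k   # 15.0 5.0 5.0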
1. Variables and constants

1.1 Use TensorFlow to implement a counter, mainly by calling the add op inside a loop
# Create a variable, initialized to 0
state = tf.Variable(0, name="counter")

# Create an op whose job is to increase state by 1
one = tf.constant(1)
new_value = tf.add(state, 1)   # the literal 1 is used directly instead of `one`
update = tf.assign(state, new_value)


# After launching the graph, run the update op
with tf.Session() as sess:
    # Once the graph is created, the variables must be 'initialized'
    sess.run(tf.global_variables_initializer())
    # check the initial value of state
    print sess.run(state)
    for _ in range(3):
        sess.run(update)  # each run increases state by 1
        print sess.run(state)
0
1
2
3
1.2 Use TF to sum a group of numbers, then compute the average
h_sum = tf.Variable(0.0, dtype=tf.float32)
# h_vec = tf.random_normal(shape=([10]))
h_vec = tf.constant([1.0, 2.0, 3.0, 4.0])
# Add each element of h_vec to h_sum, then divide by the number of elements to get the average

# the number to be added
h_add = tf.placeholder(tf.float32)
# the value after the addition
h_new = tf.add(h_sum, h_add)
# op that updates h_sum to h_new
update = tf.assign(h_sum, h_new)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # check the initial values
    print 's_sum =', sess.run(h_sum)
    print "vec =", sess.run(h_vec)

    # add in a loop
    for _ in range(4):
        sess.run(update, feed_dict={h_add: sess.run(h_vec[_])})
        print 'h_sum =', sess.run(h_sum)

#     print 'the mean is ', sess.run(sess.run(h_sum) / 4)  # written like this, 4 is wrong; it must be converted to a TF variable or constant
    print 'the mean is ', sess.run(sess.run(h_sum) / tf.constant(4.0))
s_sum = 0.0
vec =  [ 1.  2.  3.  4.]
h_sum = 1.0
h_sum = 3.0
h_sum = 6.0
h_sum = 10.0
the mean is  2.5
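For comparison (this is not part of the original example), the same sum and mean can be computed without any placeholder loop at all, using TensorFlow's built-in reduction ops; h_vec2 is just an illustrative name:

h_vec2 = tf.constant([1.0, 2.0, 3.0, 4.0])
total = tf.reduce_sum(h_vec2)    # sum of all elements -> 10.0
mean = tf.reduce_mean(h_vec2)    # mean of all elements -> 2.5

with tf.Session() as sess:
    print sess.run(total)   # 10.0
    print sess.run(mean)    # 2.5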
1.3 Use only one variable to implement the counter

The counter above is the example from the official TensorFlow documentation, but it feels bloated, so let's write a simpler one: define just one variable and one plus-1 operation (op), and drive it with a for loop.

# Without reassignment through assign(), state would stay at its initial 0.0 on every sess.run()
state = tf.Variable(0.0, dtype=tf.float32)
# change the value of state through the assign op
add_op = tf.assign(state, state + 1)

sess = tf.Session()  # the original relies on an already-open session; one is created here so the snippet runs on its own
sess.run(tf.global_variables_initializer())
print 'init state ', sess.run(state)
for _ in xrange(3):
    sess.run(add_op)
    print sess.run(state)
Init state  0.0
1.0
2.0
3.0

This is basically the same as the way we usually implement a counter. The important point to understand is that tf.assign(ref, value) is how TensorFlow assigns value to the variable ref; done this way, no extra initialization op for ref has to be defined inside the loop.

2. Use of InteractiveSession()

InteractiveSession() mainly avoids having to keep a handle to the session: it installs itself as the default session, so tensors can be evaluated without naming it.

a = tf.constant(1.0)
b = tf.constant(2.0)
c = a + b

# the following two ways are equivalent
with tf.Session():
    print c.eval()

sess = tf.InteractiveSession()
print c.eval()
sess.close()
3.0
3.0
a = tf.constant(1.0)
b = tf.constant(2.0)
c = tf.Variable(3.0)
d = a + b

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())

###################
# this is wrong
# print a.run()
# print d.run()
####################

# this is right
print a.eval()
print d.eval()

# the run() method is mainly used for ops
x = tf.Variable(1.2)
# print x.eval()  # not initialized yet, cannot be used
x.initializer.run()  # x.initializer is an initialization op; only an op can call run()
print x.eval()

sess.close()
1.0
3.0
1.2
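To summarize the rule of thumb (a minimal standalone sketch, not from the original post): an op has run() but no eval(), a tensor has eval() but no run(), and inside an InteractiveSession both are shorthand for sess.run(...) on the default session.

sess = tf.InteractiveSession()
y = tf.Variable(2.0)  # illustrative variable, not from the original code
init_op = tf.global_variables_initializer()
init_op.run()     # an op: this is shorthand for sess.run(init_op)
print y.eval()    # a tensor: this is shorthand for sess.run(y), prints 2.0
sess.close()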
2.1 How can tf.InteractiveSession() be used to do the sum and average from 1.2 above?
h_sum = tf.Variable(0.0, dtype=tf.float32)
# h_vec = tf.random_normal(shape=([10]))
h_vec = tf.constant([1.0, 2.0, 3.0, 4.0])
# Add each element of h_vec to h_sum, then divide by the number of elements to get the average

# the number to be added
h_add = tf.placeholder(tf.float32)
# the value after the addition
h_new = tf.add(h_sum, h_add)
# op that updates h_sum to h_new
update = tf.assign(h_sum, h_new)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print 's_sum =', h_sum.eval()
print "vec =", h_vec.eval()
print "vec =", h_vec[0].eval()

for _ in range(4):
    update.eval(feed_dict={h_add: h_vec[_].eval()})
    print 'h_sum =', h_sum.eval()
sess.close()
s_sum = 0.0
vec =  [ 1.  2.  3.  4.]
vec =  1.0
h_sum = 1.0
h_sum = 3.0
h_sum = 6.0
h_sum = 10.0
3. Use a feed to assign values

Operations that need values fed in can be declared with tf.placeholder(), which creates a placeholder to be filled at run time.

The difference between sess.run([output], ...) and sess.run(output, ...) can be seen in the examples below: the former returns a list and prints detailed information such as the output's dtype, while the latter prints only the plain result.

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)

with tf.Session() as sess:
    print sess.run([output], feed_dict={input1: [7.0], input2: [2.0]})
[array([ 14.], dtype=float32)]
with tf.Session() as sess:
    result = sess.run(output, feed_dict={input1: [7.0], input2: [2.0]})
    print type(result)
    print result
<type 'numpy.ndarray'>
[ 14.]
with tf.Session() as sess:
    result = sess.run(output, feed_dict={input1: 7.0, input2: 2.0})
    print type(result)
    print result
<type 'numpy.float32'>
14.0
with tf.Session() as sess:
    print sess.run([output], feed_dict={input1: [7.0, 3.0], input2: [2.0, 1.0]})

[array([ 14.,   3.], dtype=float32)]
with tf.Session() as sess:
    print sess.run(output, feed_dict={input1: [7.0, 3.0], input2: [2.0, 1.0]})

[ 14.   3.]
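One more point worth knowing (an extension, not in the original text): tf.placeholder() also accepts a shape argument, and a feed whose shape does not match is rejected instead of silently accepted. A small sketch:

# placeholders with a fixed shape: only length-2 vectors may be fed
a = tf.placeholder(tf.float32, shape=[2])
b = tf.placeholder(tf.float32, shape=[2])
prod = tf.multiply(a, b)

with tf.Session() as sess:
    print sess.run(prod, feed_dict={a: [7.0, 3.0], b: [2.0, 1.0]})  # [ 14.   3.]
    # feeding, say, a length-3 vector here would raise a ValueError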

The code for this article: https://github.com/yongyehuang/Tensorflow-Tutorial
