Preface:
TensorFlow has many basic concepts to understand. The best way to learn them is to work through the tutorials on the official website step by step; there are also translated versions, which help understanding when read alongside the original: TensorFlow 1.0 documentation translation.
One: The required workflow for building and executing a computation graph
1. graph (the computation graph): see the tf.Graph class
Using TensorFlow to train a neural network consists of two parts: building the computation graph and running it.
First, building the graph. A computation graph contains a set of operations (Operation objects, also called nodes) and a set of Tensor objects (the data units passed between nodes). A default graph is always created for you, which is why you can define nodes directly in the MNIST getting-started example. When writing code, however, the graph-building code should go inside a with block:
mygraph = tf.Graph()
with mygraph.as_default():
    # Define operations and tensors in `mygraph`.
    c = tf.constant(30.0)
2. operations (graph nodes): see the tf.Operation class
An op is a node in the computation graph: it can take tensors as input and produce tensors as output. There are two ways to create an op. The first is to call an op-building function, such as tf.matmul(), tf.nn.conv2d(), or tf.nn.max_pool() in the MNIST example; the function corresponds to the op. The second is to call the Graph.create_op() method to add an op to the graph. The first method is the common one. Ops are not executed while the graph is being built; they only execute during the graph-running phase.
3. tensor: see the tf.Tensor class
A tensor is the input/output of an op's computation. Likewise, a tensor holds no value while the graph is being built; it only holds a value at run time. Tensors are passed between ops through the graph, which is what lets TensorFlow execute computation graphs representing large-scale computations, and is also where the name TensorFlow comes from (tensors flowing). Constants and variables are both tensors. In the example below, a, b, c, and d are all tensors, each usable as the output of one op or the input of another.
# Build a dataflow graph.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)
d = tf.Variable([1.0, 2.0])
4. session: see the tf.Session class
In the first three steps we built a computation graph; now we need to execute it. Session is the class that executes the graph. A tensor is evaluated via sess.run(tensor). The Session object encapsulates the environment in which ops execute and tensors are passed. A session holds resources while it is open, so remember to close it when you are done; it is usually written in a with block. Putting points 1-4 together, here is an example:
mygraph = tf.Graph()
with mygraph.as_default():
    # Build a graph.
    a = tf.constant(5.0)
    b = tf.constant(6.0)
    c = a * b
    # Using the context manager.
    with tf.Session() as sess:
        print(sess.run(c))
Two: Tensor properties and methods
1. Commonly used tensor properties:
–dtype: the type of the elements in the tensor (tf.int32, tf.float32, etc.)
–graph: the graph the tensor belongs to (returns the Graph object)
–name: the string name of the tensor. If you named the tensor when defining it, that name is output. If not, the system names it for you by the following rules: if the tensor is a constant, the first one is named "Const", the second "Const_1", the third "Const_2", and so on; if the tensor is a variable, the first is named "Variable", the second "Variable_1", the third "Variable_2", and so on (the printed tensor name also carries an output index, e.g. "Const:0"). This point matters when saving/restoring models.
–op: the op that produced this tensor; printing it lists the op's details.
–shape: the shape of the tensor.
For example (tensor properties can be printed directly, without putting them into sess.run()):
# encoding=utf-8
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.]], dtype=tf.float32)
    x1 = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.]], dtype=tf.float32)
    x2 = tf.constant([[1., 2., 3., 4.], [5., 6., 7., 8.]], dtype=tf.float32)
    y = tf.Variable([[0., 0., -2., 1.], [-1., -2., -2., -3.]])
    y1 = tf.Variable([[0., 0., -2., 1.], [-1., -2., -2., -3.]])
    y2 = tf.Variable([[0., 0., -2., 1.], [-1., -2., -2., -3.]])
    y3 = y1 + y2

# shape: the shape of the tensor
# size: the number of elements in the tensor
# rank: the number of dimensions of the tensor
with tf.Session(graph=graph) as sess:
    print(x.shape)
    print(x.dtype)
    print(x.graph)
    print(x.name)
    print(x1.name)
    print(x2.name)
    print(y.name)
    print(y1.name)
    print(y2.name)
    print(y3.shape)
2. Tensor methods
–get_shape(): gets the shape of the tensor; same effect as the shape property.
–eval(feed_dict=None, session=None): evaluates the tensor. Calling this method executes all the ops required to produce this tensor's value. eval() must be called within a session. tensor.eval() has the same effect as sess.run(tensor); as the original documentation puts it, t.eval() is a shortcut for calling tf.get_default_session().run(t). In practice there is little difference between the two.