A brief note on the relationship between graphs and sessions in TensorFlow

Transferred from: https://blog.csdn.net/xg123321123/article/details/78017997
That post was itself adapted from the following blogs:
TensorFlow Learning Notes 2: Session, Graph, Operation and Tensor
CS20SI: TensorFlow for Deep Learning Research, Note 1

The text follows:

1
TensorFlow is a graph-based computing system.
The nodes of a graph are operations (Operation), and the nodes are connected by tensors (Tensor), which form the edges.
The TensorFlow computation process is therefore a flow of tensors through a graph.

2
TensorFlow has the concept of a graph: operations are added to the graph as its nodes. When an operation is added, it is not executed immediately; TensorFlow waits until all operations have been added, then optimizes the graph to decide how the computation should be carried out.

Tensors are the variables and constants in the code. All variables must be initialized before the graph computation starts; they can all be initialized at once by running tf.global_variables_initializer() (tf.initialize_all_variables() in older versions), as sketched below.
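
A minimal sketch of variable initialization (the variable name and initial value here are made up for illustration):

import tensorflow as tf

w = tf.Variable([0.5, 0.5], name='w')      # undefined until initialized
init = tf.global_variables_initializer()   # an operation that initializes all variables

with tf.Session() as sess:
    sess.run(init)          # run the initializer before using any variable
    print(sess.run(w))      # [0.5 0.5]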

3
Working with TensorFlow typically takes three steps:

Create tensors;
Add operations (an operation takes tensors as input and outputs other tensors);
Perform the computation (that is, run the computation graph).

A TensorFlow graph must be computed in a session. The session provides the environment in which operations execute and tensors are evaluated, as shown below:

import tensorflow as tf

# Build a graph.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = a * b

# Launch the graph in a session.
sess = tf.Session()

# Evaluate the tensor 'c'.
print(sess.run(c))   # result: [3. 8.]
sess.close()

4
A session holds resources, such as variables or queues. These resources need to be released when the session is no longer needed. There are two ways to do this:

Call the Session.close() method;
Create the session with a with statement, so that it is released automatically when the context exits.

The example above can then be written as follows:

import tensorflow as tf

# Build a graph.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = a * b

# Launch the graph in a session; it is closed automatically on exit.
with tf.Session() as sess:
    print(sess.run(c))
5
If no graph is specified when a session is created, the session loads the default graph.
The constructor of the Session class is as follows:

tf.Session.__init__(target='', graph=None, config=None)

If you create multiple graphs in one process, you need a separate session for each graph; conversely, each graph can be loaded into multiple sessions for computation.

There are two ways to execute an operation or evaluate a tensor:

Call the Session.run() method. It is defined as follows; the fetches parameter is one or more operations or tensors:

tf.Session.run(fetches, feed_dict=None)

Call the Operation.run() or Tensor.eval() method. Both accept a session parameter that specifies the session in which to compute. The parameter is optional and defaults to None, in which case the computation runs in the process's default session.
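
A small sketch of both styles (the constants are made up for illustration):

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

sess = tf.Session()
print(sess.run([a, b, c]))    # a list of fetches returns a list of values: [2.0, 3.0, 6.0]
print(c.eval(session=sess))   # Tensor.eval() with an explicit session: 6.0
sess.close()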

6
There are two ways to make a session the default session:

Define the session in a with statement; it becomes the default session within that context. The example above can be modified to:
import tensorflow as tf

# Build a graph.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = a * b

with tf.Session():
    print(c.eval())

Call the Session.as_default() method in a with statement. The example above can be modified to:
import tensorflow as tf

# Build a graph.
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = a * b

sess = tf.Session()
with sess.as_default():
    print(c.eval())
sess.close()

7

The tf.Graph class represents a computable graph in TensorFlow. A graph is composed of operations and tensors: operations are the nodes of the graph (its computational units), while tensors are its edges (the data units that flow between operations).

In TensorFlow there is always a default graph. To add an operation to the default graph, you only need to call the function that defines the operation (for example, tf.add()). To define multiple graphs, call the Graph.as_default() method in a with statement to make a graph the default; operations and tensors created inside the with block are then added to that graph.

import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    c1 = tf.constant([1.0])
with tf.Graph().as_default() as g2:
    c2 = tf.constant([2.0])

with tf.Session(graph=g1) as sess1:
    print(sess1.run(c1))
with tf.Session(graph=g2) as sess2:
    print(sess2.run(c2))

Result
[1.0]
[2.0]

If you swap c1 and c2 in sess1.run(c1) and sess2.run(c2) above, the run raises an error: the graph g1 loaded by sess1 does not contain the tensor c2, and likewise the graph g2 loaded by sess2 does not contain c1.

8
An operation is a compute node in a TensorFlow graph. It takes zero or more Tensor objects as input and produces zero or more Tensor objects as output. An Operation object is created either by calling a Python operation function directly (for example, tf.matmul()) or via Graph.create_op().

For example, c = tf.matmul(a, b) creates an operation of type MatMul that takes tensors a and b as input and produces tensor c as output.

Once a graph is loaded into a session, you can execute an op either with Session.run(op) or with op.run(), which is shorthand for tf.get_default_session().run(op).
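
A minimal sketch of both call styles, using a variable initializer as the operation (the variable and its initial value are made up for illustration):

import tensorflow as tf

v = tf.Variable(1, name='v')
init = tf.global_variables_initializer()   # an Operation

with tf.Session():     # the with block makes this the default session
    init.run()         # Operation.run(): shorthand for tf.get_default_session().run(init)
    print(v.eval())    # Tensor.eval(): fetches the value, prints 1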

9
A tensor represents the output of an operation. However, a tensor is only a symbolic handle; it does not hold the value of the operation's output. To obtain that value, call Session.run(tensor) or tensor.eval().
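
A quick sketch of the distinction (the printed handle string is approximate):

import tensorflow as tf

c = tf.constant([1.0, 2.0])
print(c)                   # the symbolic handle, e.g. Tensor("Const:0", shape=(2,), dtype=float32)
with tf.Session() as sess:
    print(sess.run(c))     # the actual value: [1. 2.]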

10
Let's look at TensorFlow's graph computation process through the following code:

import tensorflow as tf

a = tf.constant(1)
b = tf.constant(2)
c = tf.constant(3)
d = tf.constant(4)

add1 = tf.add(a, b)
mul1 = tf.multiply(b, c)   # tf.mul() in TensorFlow versions before 1.0
add2 = tf.add(c, d)
output = tf.add(add1, mul1)

with tf.Session() as sess:
    print(sess.run(output))   # result: 9

The code above builds the graph shown below:

[Figure: the computation graph for the code above; the red path marks the nodes triggered by sess.run(output)]

When the session loads the graph, the compute nodes in the graph are not yet executed. When sess.run(output) is called, the relevant nodes are triggered backward from the fetched tensor output along its input paths (the red path in the figure). When the value of output is needed, the operation tf.add(add1, mul1) is triggered; that node needs the values of tensors add1 and mul1, which in turn triggers the operations tf.add(a, b) and tf.multiply(b, c), and so on.

Therefore, computing a graph does not necessarily compute every node in it; only the fetched nodes, and the nodes whose outputs they require, are evaluated.
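
This is easy to check: fetching add2 alone evaluates only the c + d path and never touches add1 or mul1 (this sketch assumes the graph from the code above has already been built):

with tf.Session() as sess:
    print(sess.run(add2))   # 7 -- only c, d and add2 are evaluated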

11
Lazy loading means deferring the creation of an object until the moment it must be used. The following code shows the difference between normal loading and lazy loading.

Normal loading:

import tensorflow as tf

# tf.Variable requires an initial value; 10 and 20 here are made up for illustration.
x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')
z = tf.add(x, y)   # the add node is created once, here

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(z)

Lazy loading:

import tensorflow as tf

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        sess.run(tf.add(x, y))   # a new add node is created on every iteration

Normal loading creates the variables x and y in the graph and, at the same time, creates the x + y operation; lazy loading creates only the two variables.

With normal loading, no matter how many times x + y is computed in the session, only the single addition node defined as z is executed. With lazy loading, every computation of x + y creates a new addition operation in the graph: after 1000 x + y computations, the normally loaded graph is unchanged, while the lazily loaded graph has gained 1000 nodes, each representing one x + y.

This is the problem with lazy loading: the bloated graph can seriously slow down loading and reading it.
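
One way to observe the growth is to count the nodes in the default graph; tf.get_default_graph().get_operations() lists them all. A sketch (initial values made up for illustration):

import tensorflow as tf

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    n_before = len(tf.get_default_graph().get_operations())
    for _ in range(10):
        sess.run(tf.add(x, y))
    n_after = len(tf.get_default_graph().get_operations())
    print(n_after - n_before)   # 10 -- one new Add node per iteration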

12
You can place parts of the computation graph on a specific GPU or CPU:

with tf.device('/gpu:2'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], name='a')
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], name='b')
    c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))   # log device placement
print(sess.run(c))

Try not to use more than one graph: each graph needs its own session, each session will by default try to claim all available GPU resources, and data can only be passed between graphs through Python/NumPy. It is better to build two disconnected subgraphs within a single graph.
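
A minimal sketch of two disconnected subgraphs in one default graph (the constants are made up for illustration):

import tensorflow as tf

# The two subgraphs share no edges but live in the same graph,
# so one session can run either of them.
sub1 = tf.add(tf.constant(1), tf.constant(2))
sub2 = tf.multiply(tf.constant(3.0), tf.constant(4.0))

with tf.Session() as sess:
    print(sess.run(sub1))   # 3    -- only subgraph 1 is evaluated
    print(sess.run(sub2))   # 12.0 -- only subgraph 2 is evaluated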

Advantages of using graphs:

Compute resources are saved: each run evaluates only the subgraph needed for the fetched result

The graph can be broken into small pieces for automatic differentiation

The computation is easy to deploy across multiple devices

Many machine learning algorithms can be visualized as graph structures
