An Introduction to Google's TensorFlow Artificial Intelligence Learning System and Its Basic Use

TensorFlow is Google's second-generation artificial intelligence learning system, and its name comes from its own operating principle. Tensor means an n-dimensional array; Flow means computation based on a dataflow graph: tensors flow from one end of the graph to the other. TensorFlow is a system that feeds complex data structures into artificial neural networks for analysis and processing. It can be used in many machine learning and deep learning areas such as speech recognition and image recognition. It improves on DistBelief, the deep learning infrastructure Google built in 2011, and it runs on devices ranging from a single smartphone up to thousands of servers in a data center. TensorFlow is fully open source and can be used by anyone.

Supported algorithms

TensorFlow expresses high-level machine learning computations, greatly simplifies the first-generation system, and offers better flexibility and extensibility. One of the highlights of TensorFlow is its support for distributed computing on heterogeneous devices: it can automatically run models on a variety of platforms, from mobile phones, to a single CPU/GPU, to distributed systems with hundreds of GPU cards.
According to the current documentation, TensorFlow supports CNN, RNN, and LSTM, which are the most popular deep neural network models in image, speech, and NLP today.

The significance of open source

The deep learning system Google has open-sourced, TensorFlow, can be applied in many places, such as speech recognition, natural language understanding, computer vision, and advertising. However, we should not exaggerate the role TensorFlow plays in an industrial machine learning system. In a complete industrial speech recognition system, besides the deep learning algorithms there are many domain-specific algorithms, as well as massive data collection and engineering system architecture. Overall, though, this open-source release is meaningful, especially for many Chinese startups: most of them do not have the ability to understand and develop a deep learning system that keeps pace internationally, so TensorFlow will greatly lower the difficulty of applying deep learning in various industries.

Basic use

To use TensorFlow you must understand how TensorFlow works (a minimal sketch follows the list):

  • it uses graphs (graph) to represent computation tasks;
  • it executes graphs in a context called a session (Session);
  • it uses tensors (tensor) to represent data;
  • it maintains state through variables (Variable);
  • it uses feeds and fetches to assign values to, or retrieve data from, arbitrary operations.
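Here is a minimal end-to-end sketch (not taken from the tutorial itself; the names x, w, and y are just illustrative) that touches each of these concepts once, using the same 0.x-era API as the rest of this article:

import tensorflow as tf

# Graph: every op created below is added as a node to the default graph.
x = tf.placeholder(tf.float32)            # data to be fed in at run time
w = tf.Variable(2.0)                      # state maintained across run() calls
y = tf.mul(w, x)                          # an op producing a tensor

# Session: the context in which the graph is executed.
with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  # Feed a value for 'x' and fetch the value of 'y'.
  print sess.run(y, feed_dict={x: 3.0})   # ==> 6.0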

Overview

TensorFlow is a programming system that uses graphs to represent computation tasks. The nodes in a graph are called ops (operations). An op takes zero or more tensors, performs a computation, and produces zero or more tensors. Each tensor is a typed multidimensional array. For example, you can represent a mini-batch of images as a four-dimensional floating-point array with dimensions [batch, height, width, channels].
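For instance, a hypothetical mini-batch of ten 28x28 RGB images (the sizes here are only an example, not from the text) would be such a rank-4 tensor:

import numpy as np
import tensorflow as tf

# A mini-batch of 10 RGB images of height 28 and width 28, laid out as
# [batch, height, width, channels].
images = np.zeros([10, 28, 28, 3], dtype=np.float32)

# Wrapping the array in a constant op yields a typed 4-D tensor in the graph.
images_tensor = tf.constant(images)
print images_tensor.get_shape()   # the static shape: 10 x 28 x 28 x 3
print images_tensor.dtype         # the element type: float32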

A TensorFlow graph describes a computation process. To actually compute anything, the graph must be launched in a session. A session distributes the graph's ops to devices such as CPUs or GPUs and provides methods to execute them. These methods return the resulting tensors: in Python the returned tensor is a NumPy ndarray object; in C and C++ it is a tensorflow::Tensor instance.

The computation graph

TensorFlow programs are usually organized into a construction phase and an execution phase. During the construction phase, the ops and their execution order are described as a graph. During the execution phase, a session is used to execute the ops in the graph.

For example, you typically create a graph in the construction phase to represent and train a neural network, and then repeatedly execute the training ops in the graph during the execution phase.
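A minimal sketch of the two phases (an illustration only, not the tutorial's own example):

import tensorflow as tf

# --- Construction phase: describe the ops as a graph. ---
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)        # nothing is computed yet; 'c' is just a node

# --- Execution phase: launch the graph in a session and run the ops. ---
with tf.Session() as sess:
  print sess.run(c)     # ==> 5.0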

TensorFlow can be used from C, C++, and Python. At present, TensorFlow's Python library is the easiest to use; it provides a large number of helper functions to simplify building graphs, which the C and C++ libraries do not yet support.

The session libraries in the three languages are consistent.

Building the graph

The first step in building a graph is to create source ops. Source ops require no input; constants (Constant) are an example. The output of a source op is passed to other ops for computation.

In the Python library, the return value of an op constructor represents the output of that op, and it can be passed as input to other op constructors.

The TensorFlow Python library has a default graph, to which the op constructors add nodes. This default graph is sufficient for many programs. Read the Graph class documentation to learn how to manage multiple graphs explicitly.
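As a small illustration of working with more than one graph (a sketch using the Graph class mentioned above; see its documentation for details):

import tensorflow as tf

# Ops created here go into the default graph.
c = tf.constant(1.0)

# Create a separate graph and make it the default inside the 'with' block,
# so ops constructed there are added to 'g' instead.
g = tf.Graph()
with g.as_default():
  d = tf.constant(2.0)

print c.graph is tf.get_default_graph()   # ==> True
print d.graph is g                        # ==> True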

import tensorflow as tf

# Create a constant op that produces a 1x2 matrix. The op is added as a node
# to the default graph.
#
# The return value of the constructor represents the output of the constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another constant op that produces a 2x1 matrix.
matrix2 = tf.constant([[2.], [2.]])

# Create a matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The return value 'product' represents the result of the matrix multiplication.
product = tf.matmul(matrix1, matrix2)

The default graph now has three nodes: two constant() ops and one matmul() op. To actually perform the matrix multiplication and obtain its result, you must launch the graph in a session.

Launching the graph in a session

The graph can only be launched after the construction phase is complete. The first step in launching a graph is to create a Session object; if it is created without arguments, the session launches the default graph.

To learn about the full session API, read the Session class documentation.

# Launch the default graph.
sess = tf.Session()

# Call the session's 'run()' method to execute the matmul op, passing 'product'
# as an argument. As mentioned above, 'product' represents the output of the
# matmul op, so passing it indicates that we want to fetch that output back.
#
# All inputs needed by the op are supplied automatically by the session;
# ops are usually executed in parallel.
#
# The call 'run(product)' therefore triggers the execution of all three ops in
# the graph: the two constant ops and the matmul op.
#
# The return value 'result' is a NumPy 'ndarray' object.
result = sess.run(product)
print result
# ==> [[ 12.]]

# Task complete; close the session.
sess.close()

A Session object should be closed after use to release its resources. Besides calling close explicitly, you can use a "with" block to close the session automatically.

with tf.Session() as sess:
  result = sess.run([product])
  print result

In its implementation, TensorFlow translates the graph definition into distributed operations so as to make full use of the available computing resources, such as CPUs or GPUs. Generally you do not need to specify CPU or GPU explicitly; TensorFlow detects them automatically. If a GPU is detected, TensorFlow uses the first GPU it finds for as many operations as possible.
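If you want to see which device each op actually ends up on, one way (a sketch using the ConfigProto option documented in the Using GPUs chapter) is to ask the session to log device placement:

import tensorflow as tf

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
c = tf.add(a, b)

# log_device_placement makes the session print which CPU or GPU each op
# was assigned to when the graph is launched.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print sess.run(c)   # ==> [ 4.  6.]
sess.close()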

If there is more than one GPU available on the machine, GPUs other than the first do not participate in the computation by default. To let TensorFlow use them, you must explicitly assign ops to them. The with ... tf.device() statement assigns a specific CPU or GPU to execute the operations:

with tf.Session() as sess:
  with tf.device("/gpu:1"):
    matrix1 = tf.constant([[3., 3.]])
    matrix2 = tf.constant([[2.], [2.]])
    product = tf.matmul(matrix1, matrix2)
    ...

Devices are identified by strings. The currently supported devices include: "/cpu:0", the CPU of the machine; "/gpu:0", the first GPU of the machine, if any; "/gpu:1", the second GPU of the machine; and so on.

Read the Using GPUs chapter for more information about TensorFlow and GPUs.

Interactive use

The Python examples in the documentation launch the graph with a Session and call the Session.run() method to execute operations.

To make it easier to work in an interactive Python environment such as IPython, you can use InteractiveSession instead of the Session class, and the Tensor.eval() and Operation.run() methods instead of Session.run(). This avoids having to keep a variable that holds the session.

# Enter an interactive TensorFlow session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()

# Add a sub op that subtracts 'a' from 'x'. Run the sub op and print the output.
sub = tf.sub(x, a)
print sub.eval()
# ==> [-2. -1.]
Tensor

A TensorFlow program uses the tensor data structure to represent all data; in the computation graph, the data passed between operations are all tensors. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape. To learn how TensorFlow handles these concepts, see the Rank, Shape, and Type section.
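For example (a small sketch, not from the text), you can inspect the static shape and element type of a tensor directly from the Python object:

import tensorflow as tf

t = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print t.get_shape()   # the static shape: a 2x3 matrix, i.e. rank 2
print t.dtype         # the element type: float32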

Variable

A variable maintains state information while the graph is executed. The following example shows how to use a variable as a simple counter. See the Variables section for more details.

# Create a variable, initialized to the scalar value 0.
state = tf.Variable(0, name="counter")

# Create an op whose effect is to increase 'state' by 1.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# Variables must be initialized by running an 'init' op after the graph is
# launched, so we must first add the 'init' op to the graph.
init_op = tf.initialize_all_variables()

# Launch the graph and run the ops.
with tf.Session() as sess:
  # Run the 'init' op.
  sess.run(init_op)
  # Print the initial value of 'state'.
  print sess.run(state)
  # Run the op that updates 'state', and print 'state'.
  for _ in range(3):
    sess.run(update)
    print sess.run(state)

# output:

# 0
# 1
# 2
# 3

The assign() operation in this code is part of the expression depicted by the graph, just like the add() operation, so it does not actually perform the assignment until run() executes the expression.
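To make that concrete, here is a small illustration reusing the counter graph above (assuming 'state' and 'update' as defined there): even after the update op has been built, 'state' does not change until the op is run.

with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print sess.run(state)   # ==> 0  (the assign has not been executed yet)
  sess.run(update)
  print sess.run(state)   # ==> 1  (the assign ran as part of 'update')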

The parameters of a statistical model are usually represented as a set of variables. For example, you can store the weights of a neural network as a tensor in a variable; during training, the tensor is updated by repeatedly running a training graph, as in the sketch below.
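A hedged sketch of what that might look like (the layer sizes, loss, and optimizer here are illustrative assumptions, not part of the tutorial):

import tensorflow as tf

# Hypothetical weights of a single linear layer, stored in a variable.
weights = tf.Variable(tf.random_uniform([784, 10], -1.0, 1.0), name="weights")

x = tf.placeholder(tf.float32, [None, 784])          # input batch
y_target = tf.placeholder(tf.float32, [None, 10])    # desired output

y = tf.matmul(x, weights)                        # model output
loss = tf.reduce_mean(tf.square(y - y_target))   # squared-error loss

# Each run of 'train_op' updates the tensor held by 'weights'.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)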

Fetch

To fetch the outputs of operations, pass in the tensors you want to retrieve when calling run() on the Session object. In the previous example we fetched only the single node state, but you can also fetch multiple tensors:

input1 = tf.constant([3.0])
input2 = tf.constant([2.0])
input3 = tf.constant([5.0])
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)

with tf.Session() as sess:
  result = sess.run([mul, intermed])
  print result

# output:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]

When multiple tensor values need to be fetched, they are obtained together in a single run of the op, rather than fetched one by one.

Feed

The examples above introduce tensors into the computation graph stored as constants or variables. TensorFlow also provides a feed mechanism that temporarily replaces a tensor in any operation in the graph: it submits a patch to an operation in the graph, directly inserting a tensor.

A feed temporarily replaces the output of an operation with a tensor value. You supply feed data as an argument to the run() call. The feed is only valid within the call that uses it; when the call ends, the feed disappears. The most common use case is to designate certain operations as "feed" operations by creating placeholders for them with tf.placeholder().

input1 = tf.placeholder(tf.types.float32)
input2 = tf.placeholder(tf.types.float32)
output = tf.mul(input1, input2)

with tf.Session() as sess:
  print sess.run([output], feed_dict={input1: [7.], input2: [2.]})

# output:
# [array([ 14.], dtype=float32)]

If a placeholder() operation is not given a feed, it will produce an error. The MNIST fully-connected feed tutorial (source code) gives a larger-scale example of using feeds.
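A small illustration of that failure mode (reusing the 'output' op from the feed example above):

with tf.Session() as sess:
  try:
    # No feed_dict is supplied, so the placeholders have no value.
    sess.run(output)
  except Exception as e:
    print "placeholder was not fed:", e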
