TensorFlow [1] is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November 2015 and are available at www.tensorflow.org.
TensorFlow programming model and basic concepts
A TensorFlow computation is described by a directed graph composed of a set of nodes. The graph represents a dataflow computation, with extensions that allow some kinds of nodes to maintain and update persistent state, and that provide branching and looping control structures within the graph. Clients construct a computation graph using one of the supported front-end languages (Python or C++). The following example program builds and executes a TensorFlow graph using the Python front end.
import tensorflow as tf

b = tf.Variable(tf.zeros([100]))                       # 100-d vector, init to zeroes
W = tf.Variable(tf.random_uniform([784, 100], -1, 1))  # 784x100 matrix w/rnd vals
x = tf.placeholder(name="x")                           # Placeholder for input
relu = tf.nn.relu(tf.matmul(W, x) + b)                 # Relu(Wx+b)
C = [...]                                              # Cost computed as a function of Relu

s = tf.Session()
for step in xrange(0, 10):
    input = ...construct 100-d input array ...         # Create 100-d vector for input
    result = s.run(C, feed_dict={x: input})            # Fetch cost, feeding x=input
    print step, result
In a TensorFlow graph, each node has zero or more inputs and zero or more outputs, and represents the instantiation of an operation. Values that flow along normal edges in the graph (from outputs to inputs) are tensors: arrays of arbitrary dimensionality whose underlying element type is specified or inferred at graph-construction time.
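As a small illustration of this point (a sketch assuming the TF 1.x-style API used in the example above; the names a, b, x are only illustrative), a tensor's element type can be given explicitly or inferred from the values used to build the graph, and shapes may be only partially known:

import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])                    # element type inferred as float32
b = tf.constant([1, 2, 3], dtype=tf.int32)          # element type given explicitly
x = tf.placeholder(tf.float32, shape=[None, 784])   # shape only partially known
print(a.dtype, b.dtype, x.shape)                    # dtypes/shapes available at graph-construction time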
The graph can also contain special edges called control dependencies: no data flows along such an edge; it simply indicates that the source node of the edge must finish executing before the destination node starts executing.
Since our model includes mutable state, control dependencies can be used directly by clients to enforce happens-before relationships. Our implementation also sometimes inserts control dependencies of its own to enforce orderings between otherwise independent operations, for example to control peak memory usage.
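A minimal sketch of a client-specified control dependency (assuming the TF 1.x-style API; the variable names are hypothetical): the assignment to counter is forced to run before total is read, even though no tensor flows between the two operations.

import tensorflow as tf

counter = tf.Variable(0, name="counter")
total = tf.Variable(0.0, name="total")
increment = tf.assign_add(counter, 1)

# Control-dependency edge: read_total may only run after increment has executed.
with tf.control_dependencies([increment]):
    read_total = tf.identity(total)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(read_total)        # also runs increment, because of the control edge
    print(sess.run(counter))    # 1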
Operations and Kernels
An operation represents an abstract computation (e.g., matrix multiplication or addition). An operation can have attributes, and all attributes must be provided or inferred at graph-construction time in order to instantiate a node that performs the operation. One common use of attributes is to make operations polymorphic over different tensor element types (e.g., add of two float tensors versus add of two int32 tensors).
A kernel is a particular implementation of an operation that can be run on a particular type of device (e.g., CPU or GPU).
A TensorFlow binary defines the sets of operations and kernels available via a registration mechanism, and this set can be extended by linking in additional operation and/or kernel definitions and registrations.
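A hedged sketch of attribute-based polymorphism (TF 1.x-style API, illustrative names only): the same Add operation is instantiated with different element types, and the runtime picks a kernel registered for the chosen device and dtype.

import tensorflow as tf

f = tf.add(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))  # Add with float32 elements
i = tf.add(tf.constant([1, 2]), tf.constant([3, 4]))          # Add with int32 elements

# An explicit device scope; the runtime selects among registered kernels for this device.
with tf.device("/cpu:0"):
    g = tf.add(tf.constant(1.0), tf.constant(2.0))

with tf.Session() as sess:
    print(sess.run([f, i, g]))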
Session
Clients interact with the TensorFlow system by creating a Session. To build a computation graph, the Session interface supports an Extend method that augments the graph currently managed by the session with additional nodes and edges.
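A minimal sketch of how graph construction and session creation relate in the Python front end (an assumption based on the TF 1.x-style API; the graph g and names a, b, c are illustrative): each op created under the graph's scope augments the graph that the session will then manage.

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0)
    b = tf.constant(2.0)
    c = a + b                  # each new op augments the graph held by g

sess = tf.Session(graph=g)     # the session manages the (now extended) graph
print(sess.run(c))             # 3.0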
The other primary operation supported by the session interface is Run, which takes a set of output names that need to be computed, as well as an optional set of tensors to be fed into the graph in place of certain outputs of nodes. Using the arguments to Run, the TensorFlow implementation can compute the transitive closure of all nodes that must be executed in order to compute the requested outputs, and can then arrange to execute the appropriate nodes in an order that respects their dependencies (as described in more detail in Section 3.1). Most of our uses of TensorFlow set up a Session with a graph once, and then execute the full graph or a few distinct subgraphs thousands or millions of times via Run calls.
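Expanding the product = tf.matmul(m1, m2) note into a hedged sketch (TF 1.x-style API; m1, m2, and unrelated are illustrative names): Run takes the outputs to fetch and an optional feed_dict of tensors substituted for node outputs, and only ops in the transitive closure of the fetches are executed.

import tensorflow as tf

m1 = tf.placeholder(tf.float32, shape=[1, 2], name="m1")
m2 = tf.constant([[2.0], [2.0]])
product = tf.matmul(m1, m2)
unrelated = tf.constant(42.0) * 2.0   # not in the transitive closure of 'product', so not run below

with tf.Session() as sess:
    # Fetch 'product', feeding a value in place of the placeholder's output.
    result = sess.run(product, feed_dict={m1: [[3.0, 3.0]]})
    print(result)                     # [[12.]]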
To be continued 20161220
TensorFlow White Paper