TensorFlow Fold in a Jupyter Notebook
As the preview above shows, the environment is CentOS 7 + Python with TensorFlow 1.0 (Fold included). The first step is to add a kernel for Jupyter.

Jupyter generally does not use the Python of an environment we set up ourselves. For example, the Python that includes the tensorflow_fold library lives inside the "source activate tensorflow1.0" environment, so that Python has to be registered as a kernel before the notebook can use it.
sudo pip install -U ipykernel
source activate tensorflow1.0
python -m ipykernel install --user
To explain briefly: on Linux, the ipykernel package makes it easy to add a kernel. Note, however, that the currently active Python may not be the Python you want to add; you can check with "which python".
After the installation succeeds, you can see the new kernel under New in the notebook.

Choosing which GPU to use
If your machine has more than one GPU and you want to use only one of them (for example, on our group's four-way Titan machine, my little program should not get in the way of so many great gods), note that by default TensorFlow grabs all the GPUs.
To handle this, add the following code at the top of your script (note: before the import). For example, to use the GPU with index 0:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
Tips: "import tensorflow" and "import tensorflow_fold" decide which GPU to use at import time, so be sure to set the environment variables first, and only then import the two brothers.

About tensorflow_fold

Implementing dynamic interfaces on a static framework via the dynamic batching algorithm is tedious, but don't worry: that process is handled automatically by the framework. As users of the framework, we only need to know how to call the official interfaces. TensorFlow Fold is a package on top of TensorFlow, designed with ideas borrowed from functional programming so that users can quickly build dynamic computation graphs. Let's take a quick look; for more detail, see the official tutorial documents.

Documents

TensorFlow Fold provides functions specifically for processing sequences (x1, ..., xn):
- Map(f): computes [f(x1), ..., f(xn)], applying the function f to every element of the sequence, for example converting every word in a sentence into a word vector;
- Fold(g, z): computes g(...g(g(z, x1), x2)..., xn), for example unrolling an RNN (recurrent neural network);
- Reduce(g): computes g(Reduce(g)[x1, ..., x_{n/2}], Reduce(g)[x_{n/2+1}, ..., xn]), applying the function g over a balanced binary tree, for example max- or sum-pooling over the elements of a sequence.

Since TensorFlow's original basic unit, the tensor, is not suited to building dynamic graphs, Fold introduces a new basic component, the block. A block has an explicit input type and output type, including:
- Input: a native object of the host programming language (e.g. a Python dictionary);
- Tensor: a basic TensorFlow unit, with a data type and shape;
- Tuple(t1, ..., tn): each t in the parentheses gives the type of the corresponding position;
- Sequence(t): a sequence, of indefinite length, of elements of type t;
- Void: the unit type.

These basic types can be nested in each other; for example, a block's input type can be a tuple of input types. The basic functions for creating blocks are:
- Scalar: converts a Python scalar to a tensor;
- Tensor: converts a NumPy array to a tensor;
- Function(h): creates an operation block from the function h;
- InputTransform(h): for preprocessing Python types.

The basic functions for combining blocks are:
- b1 >> b2, pipelining: b1's output becomes b2's input;
- Record({l1: b1, ..., ln: bn}): accepts a Python dictionary as input and applies each block to the value under the corresponding key;
- OneOf(b1, ..., bn): applies one of b1, ..., bn depending on the input condition;
- Optional(b): a special case of OneOf; applies b if the input is not None;
- AllOf(b1, ..., bn): applies every block to the input.

The advanced functions for combining blocks are:
- Composition(): an upgrade of pipelining. A pipeline can only express a serial flow; Composition() creates a Scope object, and within the indented scope, b.reads(b1, ..., bn) reads multiple data streams, which can be used to build multi-branch structures;
- ForwardDeclaration(): used to create recursive structures. You can first define a placeholder expression expr and, once the real expression is defined, call expr.resolve_to(expr_def) to substitute it in recursively. This is an essential tool for building tree-structured computation graphs.

Coding Logs
Let's try a few examples...
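As a warm-up, the semantics of Map, Fold, and Reduce described above can be sketched in plain Python. This is only an illustration of the semantics, not the Fold API (the helper names seq_map, seq_fold, and seq_reduce are made up for this sketch):

```python
from functools import reduce as _reduce

def seq_map(f, xs):
    # Map(f): [f(x1), ..., f(xn)]
    return [f(x) for x in xs]

def seq_fold(g, z, xs):
    # Fold(g, z): g(...g(g(z, x1), x2)..., xn), a left fold
    return _reduce(g, xs, z)

def seq_reduce(g, xs):
    # Reduce(g): apply g over a balanced binary tree
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return g(seq_reduce(g, xs[:mid]), seq_reduce(g, xs[mid:]))
```

For example, seq_fold with an addition function sums a sequence, and seq_reduce with max performs max-pooling over it.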
"" "
TD. Inputtransform (FN):
Python Function to Blocks
"" "
def func (alist): Return
(alist[3), alist[0]+alist[ 1]
B = TD. Inputtransform (func)
b.eval ([1,2,3,4]) # => (4,3)
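The >> pipelining combinator can also be mimicked in plain Python. Here is a minimal toy sketch (the Block class below is an analogy invented for illustration, not Fold's actual implementation):

```python
class Block(object):
    """Toy stand-in for a Fold block: wraps a one-argument function."""
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # b1 >> b2: pipe b1's output into b2
        return Block(lambda x: other.fn(self.fn(x)))

    def eval(self, x):
        return self.fn(x)

double = Block(lambda x: x * 2)
inc = Block(lambda x: x + 1)
pipeline = double >> inc  # first double, then increment
pipeline.eval(3)  # => 7
```

The real Fold blocks additionally carry input/output types and compile into a TensorFlow graph, but the data-flow intuition of >> is the same.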
"" Tf.split (value, Num_or_size_splits, Axis=0, Num=none, name= ' split '): Splits a tensor into sub tensors. "" " # ' value ' is a-tensor with shape [5,] # Split ' value ' into 3 tensors with sizes [4, M] along Dimension 1 split0, S Plit1, Split2 = Tf.split (value, [4, one], 1) tf.shape (split0) ==> [5, 4] Tf.shape (split1) ==> [5,] Tf.shape (sp LIT2) ==> [5, one] # Split ' value ' into 3 tensors along Dimension 1 split0, split1, split2 = Tf.split (value, num_or_size _splits=3, Axis=1) tf.shape (split0) ==> [5]