The Python API for MXNet

0. Python API

http://mxnet.io/api/python/

1. NDArray

An NDArray is a multidimensional container of items of the same type and size.

>>> import mxnet as mx
>>> x = mx.nd.array([[1, 2, 3], [4, 5, 6]])
>>> type(x)
<class 'mxnet.ndarray.NDArray'>
>>> x.shape
(2, 3)
>>> y = x + mx.nd.ones(x.shape) * 3
>>> print(y.asnumpy())
[[ 4.  5.  6.]
 [ 7.  8.  9.]]
>>> z = y.as_in_context(mx.gpu(0))
>>> print(z)
<NDArray 2x3 @gpu(0)>
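
NDArray also interoperates with NumPy in the other direction, which the asnumpy() call above already hints at. A minimal sketch of the round trip and of device placement (the values and names here are illustrative, not from the original article):

>>> import numpy as np
>>> a = mx.nd.array(np.arange(6).reshape(2, 3))  # NumPy array -> NDArray
>>> a.context                                    # every NDArray lives on a device
cpu(0)
>>> a.asnumpy()                                  # NDArray -> NumPy array (copies to host)
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.]], dtype=float32)
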
2. Symbol

A symbol declares a computation. It is composed of operators, such as simple matrix operations (e.g. "+") or neural network layers (e.g. a convolution layer). We can bind data to a symbol to execute the computation:

>>> a = mx.sym.Variable('a')
>>> b = mx.sym.Variable('b')
>>> c = 2 * a + b
>>> type(c)
<class 'mxnet.symbol.Symbol'>
>>> e = c.bind(mx.cpu(), {'a': mx.nd.array([1, 2]), 'b': mx.nd.array([2, 3])})
>>> y = e.forward()
>>> y
[<NDArray 2 @cpu(0)>]
>>> y[0].asnumpy()
array([ 4.,  7.], dtype=float32)
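
Since a symbol is only a declaration, it can also be inspected before any data is bound. A brief sketch using the c defined above (the outputs shown are my expectation, for illustration):

>>> c.list_arguments()                   # free variables the symbol depends on
['a', 'b']
>>> arg_shapes, out_shapes, aux_shapes = c.infer_shape(a=(2,), b=(2,))
>>> out_shapes                           # shape inference runs without executing
[(2,)]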

bind(data_shapes, label_shapes=None, for_training=True, inputs_need_grad=False, force_rebind=False, shared_module=None, grad_req='write')

Bind the symbols to construct executors. This is necessary before one can perform computation with the module. A usage sketch follows the parameter list.

Parameters:
    data_shapes (list of (str, tuple)) – typically data_iter.provide_data.
    label_shapes (list of (str, tuple)) – typically data_iter.provide_label.
    for_training (bool) – default is True. Whether the executors should be bound for training.
    inputs_need_grad (bool) – default is False. Whether gradients with respect to the input data need to be computed. Typically this is not needed, but it might be when implementing composition of modules.
    force_rebind (bool) – default is False. This function does nothing if the executors are already bound, but with this set to True the executors are forced to rebind.
    shared_module (Module) – default is None. This is used in bucketing. When not None, the shared module essentially corresponds to a different bucket – a module with a different symbol but the same sets of parameters (e.g. unrolled RNNs of different lengths).
    grad_req (str, list of str, or dict of str to str) – requirement for gradient accumulation. Can be 'write', 'add', or 'null' (default 'write'). Can be specified globally (str) or per argument (list, dict).
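
As a concrete illustration, here is a minimal sketch of binding with explicit shapes instead of an iterator's provide_data/provide_label. The batch size of 32, the feature length of 100, and the names 'data'/'softmax_label' are assumptions for illustration (they match the defaults of mx.mod.Module):

net = mx.sym.FullyConnected(mx.sym.Variable('data'), num_hidden=10, name='fc')
net = mx.sym.SoftmaxOutput(net, name='softmax')      # its label is named 'softmax_label'
mod = mx.mod.Module(symbol=net, context=mx.cpu())
mod.bind(data_shapes=[('data', (32, 100))],          # 32 examples of 100 features each
         label_shapes=[('softmax_label', (32,))],    # one class label per example
         for_training=True, grad_req='write')        # defaults shown explicitly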

3. Module

The module API, defined in the module (or simply mod) package, provides intermediate-level and high-level interfaces for performing computation with a Symbol. One can roughly think of a module as a machine which can execute a program defined by a Symbol.

The class module.Module is a commonly used module, which accepts a Symbol as input:

data = mx.symbol.Variable('data')
fc1  = mx.symbol.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(fc1, name='relu1', act_type="relu")
fc2  = mx.symbol.FullyConnected(act1, name='fc2', num_hidden=10)
out  = mx.symbol.SoftmaxOutput(fc2, name='softmax')
mod  = mx.mod.Module(out)  # create a module from the given Symbol

Assume there is a valid MXNet data iterator data. We can initialize the module:

mod.bind(data_shapes=data.provide_data,
         label_shapes=data.provide_label)  # allocate memory by the given input shapes
mod.init_params()  # initialize parameters with the default random initializer

Now the module is able to compute. We can call high-level APIs to train and predict:

mod.fit(data, num_epoch=10, ...)  # train
mod.predict(new_data)  # predict on new data
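
For reference, a fuller fit call might look like the sketch below; the train_iter/val_iter names and the optimizer settings are illustrative assumptions, not part of the original snippet:

mod.fit(train_iter,                           # any MXNet data iterator
        eval_data=val_iter,                   # optional validation iterator
        optimizer='sgd',
        optimizer_params={'learning_rate': 0.01},
        eval_metric='acc',                    # report accuracy each epoch
        num_epoch=10)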

Or use intermediate-level APIs to perform step-by-step computations:

mod.forward(data_batch)  # forward on the provided data batch
mod.backward()  # backward to calculate the gradients
mod.update()  # update parameters using the default optimizer
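
Putting these three calls together, a bare-bones training loop might look like the following. This is a minimal sketch under stated assumptions: a fresh module around the out symbol from above, synthetic NDArrayIter data, and illustrative optimizer settings:

import mxnet as mx

# synthetic data: 100 examples with 100 features each (illustrative)
train_iter = mx.io.NDArrayIter(data=mx.random.uniform(0, 1, (100, 100)),
                               label=mx.nd.zeros((100,)),
                               batch_size=25)

mod = mx.mod.Module(out)  # 'out' is the symbol defined earlier in this section
mod.bind(data_shapes=train_iter.provide_data,
         label_shapes=train_iter.provide_label)
mod.init_params()
mod.init_optimizer(optimizer='sgd', optimizer_params={'learning_rate': 0.1})

metric = mx.metric.create('acc')
for epoch in range(5):
    train_iter.reset()
    metric.reset()
    for batch in train_iter:
        mod.forward(batch, is_train=True)       # forward pass
        mod.update_metric(metric, batch.label)  # track training accuracy
        mod.backward()                          # compute gradients
        mod.update()                            # apply one optimizer step
    print('epoch %d, train %s' % (epoch, metric.get()))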

A detailed tutorial is available at http://mxnet.io/tutorials/python/module.html.

4. KVStore

KVStore provides basic push and pull operations over multiple devices (GPUs) on a single machine.

# initialization
>>> kv = mx.kv.create('local')  # create a local kv store
>>> shape = (2, 3)              # 2 rows, 3 cols
>>> kv.init(3, mx.nd.ones(shape) * 2)
>>> a = mx.nd.zeros(shape)
>>> kv.pull(3, out=a)
>>> print a.asnumpy()
[[ 2.  2.  2.]
 [ 2.  2.  2.]]

# push, aggregation, and updater
>>> kv.push(3, mx.nd.ones(shape) * 8)
>>> kv.pull(3, out=a)  # pull out the value
>>> print a.asnumpy()
[[ 8.  8.  8.]
 [ 8.  8.  8.]]

# You can push multiple values into the same key, where KVStore
# first sums all of these values, and then pushes the aggregated
# value, as follows:
>>> cpus = [mx.cpu(i) for i in range(4)]
>>> b = [mx.nd.ones(shape, cpu) for cpu in cpus]
>>> kv.push(3, b)
>>> kv.pull(3, out=a)
>>> print a.asnumpy()
[[ 4.  4.  4.]
 [ 4.  4.  4.]]

# You can replace the default updater to control how values are merged:
>>> def update(key, input, stored):
...     print "update on key: %d" % key
...     stored += input * 2
>>> kv._set_updater(update)
>>> kv.pull(3, out=a)
>>> print a.asnumpy()
[[ 4.  4.  4.]
 [ 4.  4.  4.]]
>>> kv.push(3, mx.nd.ones(shape))
update on key: 3
>>> kv.pull(3, out=a)
>>> print a.asnumpy()
[[ 6.  6.  6.]
 [ 6.  6.  6.]]
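
The pull side mirrors this: a value can be pulled into several device buffers in one call. A brief sketch reusing the cpus, shape, and key from above (the output assumes the session state after the last push):

>>> b = [mx.nd.zeros(shape, cpu) for cpu in cpus]  # one buffer per device
>>> kv.pull(3, out=b)                              # a single pull fills all of them
>>> print b[1].asnumpy()
[[ 6.  6.  6.]
 [ 6.  6.  6.]]
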
5. Data Loading

class mxnet.io.NDArrayIter(data, label=None, batch_size=1, shuffle=False, last_batch_handle='pad', data_name='data', label_name='softmax_label')

Iterates over either mx.nd.NDArray or numpy.ndarray.

Parameters:
    data (array, list of arrays, or dict of string to array) – input data
    label (array, list of arrays, or dict of string to array, optional) – input label
    batch_size (int) – batch size
    shuffle (bool, optional) – whether to shuffle the data
    last_batch_handle (str, optional) – how to handle the last batch; can be 'pad', 'discard', or 'roll_over'. 'roll_over' is intended for training and can cause problems if used for prediction.
    data_name (str, optional) – the data name
    label_name (str, optional) – the label name

A data iterator reads data batch by batch:

>>> data = mx.nd.ones((100, 10))
>>> nd_iter = mx.io.NDArrayIter(data, batch_size=25)
>>> for batch in nd_iter:
...     print(batch.data)
[<NDArray 25x10 @cpu(0)>]
[<NDArray 25x10 @cpu(0)>]
[<NDArray 25x10 @cpu(0)>]
[<NDArray 25x10 @cpu(0)>]

In addition, an iterator provides information about the batch, including the shapes and names:

>>> nd_iter = mx.io.NDArrayIter(data={'data': mx.nd.ones((100, 10))},
...                             label={'softmax_label': mx.nd.ones((100,))},
...                             batch_size=25)
>>> print(nd_iter.provide_data)
[DataDesc[data,(25, 10),<type 'numpy.float32'>,NCHW]]
>>> print(nd_iter.provide_label)
[DataDesc[softmax_label,(25,),<type 'numpy.float32'>,NCHW]]
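
To see last_batch_handle in action, a quick sketch with a batch size that does not divide the data evenly (the sizes here are made up for illustration):

>>> data = mx.nd.ones((40, 10))   # 40 examples does not divide by batch_size=25
>>> it = mx.io.NDArrayIter(data, batch_size=25, last_batch_handle='discard')
>>> sum(1 for _ in it)            # the incomplete final batch is dropped
1
>>> it = mx.io.NDArrayIter(data, batch_size=25, last_batch_handle='pad')
>>> sum(1 for _ in it)            # the final batch is padded up to 25 rows
2
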
6. Optimization: initialize and update weights
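
In the Module workflow, this step corresponds to choosing an initializer and an optimizer. A minimal sketch, assuming the mod built in section 3 above; the Xavier initializer and the SGD hyperparameters are illustrative assumptions:

mod.init_params(initializer=mx.init.Xavier())   # initialize weights
mod.init_optimizer(optimizer='sgd',
                   optimizer_params={'learning_rate': 0.1, 'momentum': 0.9})
# after this, mod.update() applies one SGD step using the latest gradients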