MXNet parameter regularization

get_internals()

Gets a new grouped symbol sgroup. The output of sgroup is a list of outputs of all of the internal nodes.

>>> a = mx.sym.var('a')
>>> b = mx.sym.var('b')
>>> c = a + b
>>> d = c.get_internals()
>>> d
<Symbol Grouped>
>>> d.list_outputs()
['a', 'b', '_plus4_output']
For example, get_internals() can be used to pull every weight and bias symbol out of a network and add an L2 penalty over them to the loss:

def l2_penalty(w, b):
    print(w, b)
    # return mx.sym.sum(mx.sym.square(mx.sym.Variable(w))) + mx.sym.sum(mx.sym.square(mx.sym.Variable(b)))
    return mx.sym.sum(mx.sym.square(w)) + mx.sym.sum(mx.sym.square(b))

def get_symbol():
    ...
    fc3 = mx.sym.FullyConnected(name='fc3', data=dropout2, num_hidden=num_classes)
    if dtype == 'float16':
        fc3 = mx.sym.Cast(data=fc3, dtype=np.float32)
    output = mx.sym.softmax(data=fc3, axis=1, name='softmax_layer')
    print(output.get_internals()['conv1_weight'])
    # list_arguments() yields 'data' followed by alternating weight/bias names,
    # so step through it two at a time and penalize each (weight, bias) pair.
    mate_cnn_fc = [l2_penalty(output.get_internals()[output.list_arguments()[i]],
                              output.get_internals()[output.list_arguments()[i + 1]])
                   for i in range(1, len(output.list_arguments()), 2)]
    mates_sum = mx.sym.add_n(*mate_cnn_fc)
    loss = mx.sym.mean(emd_l2(output, label, num_classes)) + weightdecay * mates_sum
    emd2_loss = mx.sym.MakeLoss(loss, name='loss')
    # BlockGrad exposes the prediction as an output without back-propagating through it.
    pred_loss = mx.sym.Group([mx.sym.BlockGrad(output, name='pred'), emd2_loss])
    # softmax = mx.sym.SoftmaxOutput(data=fc3, name='softmax')
    # return softmax
    return pred_loss
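
The list comprehension above relies on list_arguments() returning 'data' first and then the weights and biases in alternating pairs, e.g. (the layer names here are illustrative):

>>> output.list_arguments()
['data', 'conv1_weight', 'conv1_bias', ..., 'fc3_weight', 'fc3_bias']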
get_internals() is also used to graft several task-specific heads onto a shared backbone for multi-task attribute prediction:

# Network definition
def getMTL(sym, layer_name):
    all_layers = sym.get_internals()
    flat = all_layers[layer_name + '_output']
    pred_gender = mx.symbol.FullyConnected(data=flat, num_hidden=2, name='pred_gender')
    # pred_gender = mx.symbol.FullyConnected(data=flat, num_hidden=1, name='pred_gender')
    pred_age = mx.symbol.FullyConnected(data=flat, num_hidden=1, name='pred_age')
    pred_mask = mx.symbol.FullyConnected(data=flat, num_hidden=2, name='pred_mask')
    pred_glass = mx.symbol.FullyConnected(data=flat, num_hidden=2, name='pred_glass')
    pred_sunglass = mx.symbol.FullyConnected(data=flat, num_hidden=2, name='pred_sunglass')
    pred_hat = mx.symbol.FullyConnected(data=flat, num_hidden=2, name='pred_hat')
    # Each attribute label is one column of 'attr_label'; ignore_label=-1 masks
    # samples that are unlabeled for a given attribute.
    labels = mx.symbol.Variable('attr_label')
    label_gender = mx.symbol.slice_axis(data=labels, axis=1, begin=0, end=1, name='slice01')
    label_gender = mx.symbol.Flatten(data=label_gender)
    label_gender_reshape = mx.symbol.Reshape(data=label_gender, shape=(-1,))
    loss_gender = mx.symbol.SoftmaxOutput(data=pred_gender, label=label_gender_reshape,
                                          grad_scale=1, use_ignore=True, ignore_label=-1,
                                          name='gender_out')
    # loss_gender = mx.symbol.LogisticRegressionOutput(data=pred_gender,
    #     label=label_gender_reshape) * (label_gender != -1)
    label_age = mx.symbol.slice_axis(data=labels, axis=1, begin=1, end=2, name='slice12')
    label_age = mx.symbol.Flatten(data=label_age)
    # label_age = label_age / 50.0
    # pred_age = pred_age / 50.0
    label_age_reshape = mx.symbol.Reshape(data=label_age, shape=(-1,))
    # loss_age = mx.symbol.LogisticRegressionOutput(data=pred_age,
    #     label=label_age_reshape) * (label_age != -1)
    loss_age = mx.symbol.Custom(data=pred_age, label=label_age,
                                op_type='l2_regression') * (label_age != -1)
    label_mask = mx.symbol.slice_axis(data=labels, axis=1, begin=2, end=3, name='slice23')
    label_mask = mx.symbol.Flatten(data=label_mask)
    label_mask_reshape = mx.symbol.Reshape(data=label_mask, shape=(-1,))
    loss_mask = mx.symbol.SoftmaxOutput(data=pred_mask, label=label_mask_reshape,
                                        grad_scale=1, use_ignore=True, ignore_label=-1,
                                        name='mask_out')
    label_glass = mx.symbol.slice_axis(data=labels, axis=1, begin=3, end=4, name='slice34')
    label_glass = mx.symbol.Flatten(data=label_glass)
    label_glass_reshape = mx.symbol.Reshape(data=label_glass, shape=(-1,))
    loss_glass = mx.symbol.SoftmaxOutput(data=pred_glass, label=label_glass_reshape,
                                         grad_scale=1, use_ignore=True, ignore_label=-1,
                                         name='glass_out')
    label_sunglass = mx.symbol.slice_axis(data=labels, axis=1, begin=4, end=5, name='slice45')
    label_sunglass = mx.symbol.Flatten(data=label_sunglass)
    label_sunglass_reshape = mx.symbol.Reshape(data=label_sunglass, shape=(-1,))
    loss_sunglass = mx.symbol.SoftmaxOutput(data=pred_sunglass, label=label_sunglass_reshape,
                                            grad_scale=1, use_ignore=True, ignore_label=-1,
                                            name='sunglass_out')
    label_hat = mx.symbol.slice_axis(data=labels, axis=1, begin=5, end=6, name='slice56')
    label_hat = mx.symbol.Flatten(data=label_hat)
    label_hat_reshape = mx.symbol.Reshape(data=label_hat, shape=(-1,))
    loss_hat = mx.symbol.SoftmaxOutput(data=pred_hat, label=label_hat_reshape,
                                       grad_scale=1, use_ignore=True, ignore_label=-1,
                                       name='hat_out')
    return mx.symbol.Group([loss_gender, loss_age, loss_mask, loss_glass,
                            loss_sunglass, loss_hat])
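
A minimal sketch of how the grouped multi-task symbol might then be bound for training; the backbone layer name 'flatten0' and the iterator train_iter are assumptions, not from the original post:

>>> mtl = getMTL(sym, 'flatten0')
>>> mod = mx.mod.Module(symbol=mtl, data_names=('data',), label_names=('attr_label',))
>>> mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
>>> mod.init_params()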
list_arguments()

Lists all of the arguments in the symbol.

Example

>>> a = mx.sym.var('a')
>>> b = mx.sym.var('b')
>>> c = a + b
>>> c.list_arguments()
['a', 'b']

Returns: args – list containing the names of all arguments required to compute the symbol.
>>> data = mx.sym.Variable('data')
>>> prev = mx.sym.Variable('prev')
>>> fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=128)
>>> fc2 = mx.sym.FullyConnected(data=prev, name='fc2', num_hidden=128)
>>> out = mx.sym.Activation(data=mx.sym.elemwise_add(fc1, fc2), act_type='relu')
>>> out.list_arguments()
['data', 'fc1_weight', 'fc1_bias', 'prev', 'fc2_weight', 'fc2_bias']
>>> out.infer_shape(data=(10,64))
(None, None, None)
>>> out.infer_shape_partial(data=(10,64))
([(10L, 64L), (128L, 64L), (128L,), (), (), ()], [(10L, 128L)], [])
>>> # infers shape if you give information about fc2
>>> out.infer_shape(data=(10,64), prev=(10,128))
([(10L, 64L), (128L, 64L), (128L,), (10L, 128L), (128L,), (128L,)], [(10L, 128L)], [])
Parameters (for modules)

- get_params(): returns a tuple (arg_params, aux_params). Each of those is a dictionary of name to NDArray mappings. Those NDArray always live on CPU; the actual parameters used for computing might live on other devices (GPUs), and this function retrieves (a copy of) the latest parameters. Therefore, modifying the returned copies does not change the parameters the devices compute with.
- set_params(arg_params, aux_params): assigns parameters to the devices doing the computation.

In code (the BaseModule method stubs):

def get_params(self):
    """Gets parameters, those are potentially copies of the actual
    parameters used to do computation on the device.

    Returns
    -------
    ``(arg_params, aux_params)``
        A pair of dictionaries each mapping parameter names to NDArray values.

    Examples
    --------
    >>> # An example of getting module parameters.
    >>> print mod.get_params()
    ({'fc2_weight': ..., 'fc1_weight': ..., 'fc3_bias': ...,
      'fc3_weight': ..., 'fc2_bias': ..., 'fc1_bias': ...}, {})
    """
    raise NotImplementedError()
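
Only get_params() is given a worked example above; for the other direction, a minimal sketch of set_params(), copying weights between two already bound modules that share the same symbol (mod_a and mod_b are illustrative names):

>>> arg_params, aux_params = mod_a.get_params()
>>> mod_b.set_params(arg_params, aux_params)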

def update(self):
    """Updates parameters according to the installed optimizer and the
    gradients computed in the previous forward-backward batch.

    Examples
    --------
    >>> # An example of updating module parameters.
    >>> mod.init_optimizer(kvstore='local', optimizer='sgd',
    ...     optimizer_params=(('learning_rate', 0.01),))
    >>> mod.backward()
    >>> mod.update()
    >>> print mod.get_params()[0]['fc3_weight'].asnumpy()
    [[  5.86930104e-03   5.28078526e-03  -8.88729654e-03  -1.08308345e-03
        6.13054074e-03   4.27560415e-03   1.53817423e-03   4.62131854e-03
        4.69872449e-03  -2.42400169e-03   9.94111411e-04   1.12386420e-03
        ...]]
    """
    raise NotImplementedError()
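
update() works together with forward() and backward() (documented below); a minimal sketch of one training pass, assuming a bound and initialized module mod with an optimizer installed and a DataIter train_iter (both names are illustrative):

>>> for batch in train_iter:
...     mod.forward(batch, is_train=True)
...     mod.backward()
...     mod.update()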

def save_params(self, fname):
    """Saves model parameters to file.

    Parameters
    ----------
    fname : str
        Path to output param file.

    Examples
    --------
    >>> # An example of saving module parameters.
    >>> mod.save_params('myfile')
    """
    arg_params, aux_params = self.get_params()
    save_dict = {('arg:%s' % k): v.as_in_context(cpu()) for k, v in arg_params.items()}
    save_dict.update({('aux:%s' % k): v.as_in_context(cpu()) for k, v in aux_params.items()})
    ndarray.save(fname, save_dict)

        "" "loads model parameters from file.

        Parameters----------fname:str Path to input param file.
        Examples-------->>> # An example of loading module parameters.
        >>> mod.load_params (' myfile ') "" "Save_dict = Ndarray.load (fname) arg_params = {} Aux_params = {} for k, value in Save_dict.items (): arg_type, name = K.split (': ', 1) if Arg_type = = ' arg ': arg_params[name] = value elif Arg_type = = ' aux ': aux_params[name] = val UE else:raise valueerror ("Invalid param file" + fname) self.set_params (Arg_params, a Ux_params def forward (self, Data_batch, Is_train=none): "" "Forward computation.
def forward(self, data_batch, is_train=None):
    """Forward computation. It supports data batches with different shapes,
    such as different batch sizes or different image sizes.
    If reshaping of the data batch relates to modification of the symbol or
    module, such as changing image layout ordering or switching from training
    to predicting, module rebinding is required.

    Parameters
    ----------
    data_batch : DataBatch
        Could be anything with similar API implemented.
    is_train : bool
        Default is ``None``, which means ``is_train`` takes the value of
        ``self.for_training``.

    Examples
    --------
    >>> import mxnet as mx
    >>> from collections import namedtuple
    >>> Batch = namedtuple('Batch', ['data'])
    >>> data = mx.sym.Variable('data')
    >>> out = data * 2
    >>> mod = mx.mod.Module(symbol=out, label_names=None)
    >>> mod.bind(data_shapes=[('data', (1, 10))])
    >>> mod.init_params()
    >>> data1 = [mx.nd.ones((1, 10))]
    >>> mod.forward(Batch(data1))
    >>> print mod.get_outputs()[0].asnumpy()
    [[ 2.  2.  2.  2.  2.  2.  2.  2.  2.  2.]]
    >>> # Forward with data batch of different shape
    >>> data2 = [mx.nd.ones((3, 5))]
    >>> mod.forward(Batch(data2))
    >>> print mod.get_outputs()[0].asnumpy()
    [[ 2.  2.  2.  2.  2.]
     [ 2.  2.  2.  2.  2.]
     [ 2.  2.  2.  2.  2.]]
    """
    raise NotImplementedError()
        "" "Raise Notimplementederror () def backward (self, out_grads=none):" "Backward computation. Parameters----------Out_grads:ndarray or List of ndarray, optional gradient on the
            Outputs to is propagated back.

        This parameter was only needed when bind was called on outputs that are not a loss function.
        Examples-------->>> # An example of backward computation. >>> mod.backward () >>> print mod.get_input_grads () [0].asnumpy () [[1.10182791e-05] [5]. 12257748e-06 4.01927764e-06 8.32566820e-06-1.59775993e-06 7.24269375e-06 7.28067835e-06-1.65902311e
        -05 5.46342608e-06 8.44196393e-07] ...] "" "Raise Notimplementederror () def get_outputs (self, Merge_multi_context=true):" "" Gets outputs of the P
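
As the out_grads note says, a head gradient must be supplied when the bound outputs are not themselves a loss; a minimal sketch, assuming data_batch is the current DataBatch and grad_out is an NDArray of the output's shape holding the gradient flowing in from downstream (both names are illustrative):

>>> mod.forward(data_batch, is_train=True)
>>> mod.backward(out_grads=[grad_out])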

def get_outputs(self, merge_multi_context=True):
    """Gets outputs of the previous forward computation.

    If ``merge_multi_context`` is ``True``, it is like ``[out1, out2]``.
    Otherwise, it returns output of the form
    ``[[out1_dev1, out1_dev2], [out2_dev1, out2_dev2]]``.
    All the output elements have type ``NDArray``. When ``merge_multi_context``
    is ``False``, those ``NDArray`` instances might live on different devices.

    Parameters
    ----------
    merge_multi_context : bool
        Defaults to ``True``. In the case when data-parallelism is used,
        the outputs will be collected from multiple devices. A ``True`` value
        indicates that we should merge the collected results so that they
        look like from a single executor.

    Returns
    -------
    list of `NDArray` or list of list of `NDArray`
        Output.

    Examples
    --------
    >>> # An example of getting forward output.
    >>> print mod.get_outputs()[0].asnumpy()
    [[ 0.09999977  0.10000153  0.10000716  0.10000195  0.09999853  0.09999743
       0.10000272  0.10000113  0.09999088  0.09999888]]
    """
    raise NotImplementedError()
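
A short sketch of the unmerged form, assuming mod was bound on two devices (the device count here is an assumption):

>>> outs = mod.get_outputs(merge_multi_context=False)
>>> len(outs[0])  # one NDArray per device for the first output
2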

def get_input_grads(self, merge_multi_context=True):
    """Gets the gradients to the inputs, computed in the previous
    backward computation.

    If ``merge_multi_context`` is ``True``, it is like ``[grad1, grad2]``.
    Otherwise, it is like ``[[grad1_dev1, grad1_dev2], [grad2_dev1, grad2_dev2]]``.
    All the output elements have type ``NDArray``. When ``merge_multi_context``
    is ``False``, those ``NDArray`` instances might live on different devices.

    Parameters
    ----------
    merge_multi_context : bool
        Defaults to ``True``. In the case when data-parallelism is used,
        the gradients will be collected from multiple devices. A ``True`` value
        indicates that we should merge the collected results so that they
        look like from a single executor.

    Returns
    -------
    list of NDArray or list of list of NDArray
        Input gradients.

    Examples
    --------
    >>> # An example of getting input gradients.
    >>> print mod.get_input_grads()[0].asnumpy()
    [[[  1.10182791e-05   5.12257748e-06   4.01927764e-06   8.32566820e-06
        -1.59775993e-06   7.24269375e-06   7.28067835e-06  -1.65902311e-05
         5.46342608e-06   8.44196393e-07]
         ...]]
    """
    raise NotImplementedError()

class mxnet.metric.CompositeEvalMetric(metrics=None, name='composite', output_names=None, label_names=None)

Manages multiple evaluation metrics.

Parameters:
    metrics (list of EvalMetric) – List of child metrics.
    name (str) – Name of this metric instance for display purposes.
    output_names (list of str, or None) – Names of the predictions that should be used when updating with update_dict. By default, include all predictions.
    label_names (list of str, or None) – Names of the labels that should be used when updating with update_dict. By default, include all labels.

Examples

>>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> labels = [mx.nd.array([0, 1, 1])]
>>> eval_metrics_1 = mx.metric.Accuracy()
>>> eval_metrics_2 = mx.metric.F1()
>>> eval_metrics = mx.metric.CompositeEvalMetric()
>>> for child_metric in [eval_metrics_1, eval_metrics_2]:
>>>     eval_metrics.add(child_metric)
>>> eval_metrics.update(labels = labels, preds = predicts)
>>> print eval_metrics.get()
(['accuracy', 'f1'], [0.6666666666666666, 0.8])

Viewing all variables in an MXNet symbol graph, and their shapes:

>>> import mxnet as mx
>>> a = mx.sym.Variable('data')
>>> b = mx.sym.FullyConnected(data=a, name='fc1', num_hidden=100)
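
One way to then list each variable together with its shape is infer_shape(); a minimal sketch continuing the snippet above (the input shape (64, 784) is illustrative):

>>> arg_shapes, out_shapes, aux_shapes = b.infer_shape(data=(64, 784))
>>> dict(zip(b.list_arguments(), arg_shapes))
{'data': (64, 784), 'fc1_weight': (100, 784), 'fc1_bias': (100,)}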
