When reprinting, please cite the source: http://www.cnblogs.com/darkknightzh/p/6221622.html
Reference URLs:
http://ju.outofmemory.cn/entry/284587
https://github.com/torch/nn/blob/master/doc/criterion.md
Assume you have a model model = setupModel() (a model you defined yourself), your own training data input, the corresponding ground-truth output outReal, and a loss function criterion (see the second URL above). One iteration of the Torch training process then looks like this:
1 -- given model, criterion, input, outReal
2 model:training()
3 model:zeroGradParameters()
4 outPredict = model:forward(input)
5 err = criterion:forward(outPredict, outReal)
6 grad_criterion = criterion:backward(outPredict, outReal)
7 model:backward(input, grad_criterion)
8 model:updateParameters(learningRate)
Line 1 states the quantities assumed to be given: model, criterion, input, and outReal.
Line 2 puts the model into training mode (this matters for modules such as dropout and batch normalization, which behave differently at training and evaluation time).
Line 3 zeroes the accumulated gradient of every module in the model (so that gradients left over from the previous iteration do not interfere with this one).
Line 4 runs the forward pass: input is fed through the model to obtain the predicted output outPredict.
Line 5 uses the loss function to compute the error err between the model's predicted output outPredict (under the current parameters) and the actual output outReal.
Line 6 computes grad_criterion, the gradient of the loss function with respect to the prediction, from outPredict and outReal.
Line 7 backpropagates: given input and grad_criterion, the gradient of each module in the model is computed.
Line 8 updates the parameters of each module in the model using the learning rate learningRate.
Lines 3 through 8 are executed on every training iteration.
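To make the loop concrete, here is a minimal self-contained sketch. The model (a single nn.Linear layer), the nn.MSECriterion loss, the random data, the iteration count, and the learning rate are all made-up choices for illustration; only the eight-step structure comes from the listing above:

```lua
require 'nn'

-- Hypothetical setup: a tiny linear regression model with MSE loss
local model = nn.Sequential()
model:add(nn.Linear(10, 1))
local criterion = nn.MSECriterion()

local input = torch.randn(10)    -- made-up training sample
local outReal = torch.randn(1)   -- made-up target
local learningRate = 0.01

model:training()                 -- line 2: training mode
for iter = 1, 100 do
   model:zeroGradParameters()                                      -- line 3
   local outPredict = model:forward(input)                         -- line 4
   local err = criterion:forward(outPredict, outReal)              -- line 5
   local grad_criterion = criterion:backward(outPredict, outReal)  -- line 6
   model:backward(input, grad_criterion)                           -- line 7
   model:updateParameters(learningRate)                            -- line 8
end
```

In practice, manual calls to updateParameters are often replaced by the optim package (e.g. optim.sgd), which wraps lines 3 through 8 in a closure, but the underlying sequence of calls is the same.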
(Original) The training process of Torch