Fine-tuning Stacked Autoencoders

Contents
1 Introduction
2 General Strategy
3 Fine-tuning with Backpropagation
4 Chinese-English Glossary
Introduction

Fine-tuning is a strategy commonly used in deep learning, and it can greatly improve the performance of a stacked autoencoder neural network. From a high-level perspective, fine-tuning treats all layers of the stacked autoencoder as a single model, so that in each iteration all of the weights in the network are optimized jointly.

General Strategy
Fortunately, the tools needed to fine-tune a stacked autoencoder are already available. To compute the gradients for all of the layers in each iteration, we use the backpropagation algorithm discussed in the sparse autoencoder section. Because backpropagation extends to an arbitrary number of layers, it can in fact be applied to a stacked autoencoder of any depth.

Fine-tuning with Backpropagation
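In code, "treating all layers as a single model" usually means concatenating every layer's weights into one parameter vector that a single optimizer step updates jointly. A minimal NumPy sketch of that idea (the shapes and the helper names `unroll`/`reroll` are hypothetical, not part of the original tutorial):

```python
import numpy as np

# Hypothetical shapes for a two-layer stacked autoencoder plus a softmax
# classifier: W1 (3x4), W2 (2x3), theta (5x2).
shapes = [(3, 4), (2, 3), (5, 2)]

def unroll(params):
    # Flatten all weight matrices into one vector so a single optimizer
    # step updates every layer of the stack at once.
    return np.concatenate([p.ravel() for p in params])

def reroll(vec, shapes):
    # Recover the per-layer matrices from the flat parameter vector.
    out, i = [], 0
    for r, c in shapes:
        out.append(vec[i:i + r * c].reshape(r, c))
        i += r * c
    return out
```

An off-the-shelf optimizer can then work on the flat vector, while the gradient computation rerolls it into per-layer matrices.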
For the reader's convenience, here is a brief summary of the backpropagation algorithm:
1. Perform a feedforward pass, computing the activations ("responses") for layers L_2, L_3, and so on up to the output layer L_{n_l}, using the equations defined in the forward propagation steps.
2. For the output layer (layer n_l), set

    \delta^{(n_l)} = - \left( \nabla_{a^{(n_l)}} J \right) \bullet f'(z^{(n_l)})

(When using the softmax classifier, the softmax layer satisfies \nabla J = \theta^T (I - P), where I is the one-hot category label for the input data and P is the vector of conditional probabilities over the categories.)
3. For l = n_l - 1, n_l - 2, \ldots, 2, set

    \delta^{(l)} = \left( (W^{(l)})^T \delta^{(l+1)} \right) \bullet f'(z^{(l)})
4. Compute the desired partial derivatives:

    \nabla_{W^{(l)}} J(W, b; x, y) = \delta^{(l+1)} (a^{(l)})^T
    \nabla_{b^{(l)}} J(W, b; x, y) = \delta^{(l+1)}
Note: We could treat the softmax classifier as an additional layer of the network, but the derivation is simpler if it is handled separately. Specifically, we treat the "last layer" of the network as the features that are fed into the softmax classifier. Therefore, the derivatives in Step 2 are computed using \nabla J = \theta^T (I - P).
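The four steps above, together with the softmax handling described in the note, can be sketched in NumPy. This assumes sigmoid activations (so f'(z) = a(1 - a)) and a cross-entropy softmax objective; the function name `finetune_gradients` and the argument layout are hypothetical, not from the original tutorial:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_gradients(Ws, bs, theta, x, y_onehot):
    """One backpropagation pass through a stacked autoencoder with a
    softmax classifier on top (helper names are hypothetical).

    Ws, bs   -- lists of weight matrices / bias column vectors, one per layer
    theta    -- softmax weight matrix (num_classes x top_feature_size)
    x        -- input column vector
    y_onehot -- one-hot label column vector
    """
    # Step 1: feedforward pass, storing every layer's activation.
    # For the sigmoid, f'(z) = a * (1 - a), so activations suffice.
    a = [x]
    for W, b in zip(Ws, bs):
        a.append(sigmoid(W @ a[-1] + b))

    # Softmax layer: conditional probability vector P.
    scores = theta @ a[-1]
    P = np.exp(scores - scores.max())
    P /= P.sum()

    # Step 2: delta for the "last layer" of the network,
    # delta = -(theta^T (I - P)) .* f'(z).
    delta = -(theta.T @ (y_onehot - P)) * a[-1] * (1 - a[-1])

    # Gradient for the softmax weights themselves.
    grad_theta = -(y_onehot - P) @ a[-1].T

    # Steps 3-4: propagate delta downward and collect the partial
    # derivatives grad_W[l] = delta^{(l+1)} (a^{(l)})^T and
    # grad_b[l] = delta^{(l+1)}.
    grad_W = [None] * len(Ws)
    grad_b = [None] * len(Ws)
    for l in range(len(Ws) - 1, -1, -1):
        grad_W[l] = delta @ a[l].T
        grad_b[l] = delta
        if l > 0:
            delta = (Ws[l].T @ delta) * a[l] * (1 - a[l])
    return grad_W, grad_b, grad_theta
```

With these gradients, each fine-tuning iteration updates every W, b, and theta together (for example by a gradient-descent step), instead of training one layer at a time.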
Chinese-English Glossary

stacked autoencoder neural network (may also be rendered as "multilayer autoencoder" or "multilayer autoencoding neural network") — stacked autoencoder
fine-tuning — fine tuning
backpropagation algorithm — backpropagation algorithm
feedforward pass — feedforward pass
activation (may also be rendered as "response") — activation

From: http://ufldl.stanford.edu/wiki/index.php/%e5%be%ae%e8%b0%83%e5%a4%9a%e5%b1%82%e8%87%aa%e7%bc%96%e7%a0%81%e7%ae%97%e6%b3%95