Stanford UFLDL Tutorial: Fine-Tuning Stacked Autoencoders

Fine-tuning stacked autoencoders. Contents: 1 Introduction · 2 General strategy · 3 Fine-tuning with backpropagation · 4 Chinese-English glossary

Fine-tuning is a strategy commonly used in deep learning, and it can markedly improve the performance of a stacked autoencoder. From a high-level perspective, fine-tuning treats all layers of a stacked autoencoder as a single model, so that on each iteration all of the weights in the network can be improved.

General strategy

Fortunately, the tools needed to fine-tune a stacked autoencoder are already at hand. To compute gradients for all layers on each iteration, we use the backpropagation algorithm discussed in the sparse autoencoder section. Because backpropagation can be applied to any multilayer network, it in fact applies to any stacked autoencoder.

Fine-tuning with backpropagation

For readers' convenience, here is a brief overview of how the backpropagation algorithm is applied:


1. Perform a feedforward pass, computing the activations for layers L_2, L_3, and so on up to the output layer L_{n_l}, using the equations defined in the forward-propagation steps.


2. For the output layer (layer n_l), set

    δ^(n_l) = −(∇_{a^(n_l)} J) • f'(z^(n_l))

(When using the softmax classifier, the softmax layer satisfies ∇J = θᵀ(I − P), where I is the class labels for the input data and P is the vector of conditional probabilities.)


3. For l = n_l − 1, n_l − 2, ..., 2, set

    δ^(l) = ((W^(l))ᵀ δ^(l+1)) • f'(z^(l))


4. Compute the desired partial derivatives:

    ∇_{W^(l)} J(W, b; x, y) = δ^(l+1) (a^(l))ᵀ
    ∇_{b^(l)} J(W, b; x, y) = δ^(l+1)


Note: We can regard the softmax classifier as an additional, final layer of the network, but its derivation must be handled separately. Specifically, the features of the network's "last layer" are fed into the softmax classifier. Therefore, the delta in Step 2 is computed as δ^(n_l) = −(θᵀ(I − P)) • f'(z^(n_l)), where ∇J = θᵀ(I − P).
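The four steps above can be sketched in NumPy for a stack of sigmoid layers with a softmax classifier on top. This is a minimal illustration, not the tutorial's own code (the original UFLDL exercises are in MATLAB); the function and variable names (`finetune_gradients`, `Ws`, `bs`, `theta`) are my own, and it processes a single training example for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_gradients(Ws, bs, theta, x, y_onehot):
    """One backpropagation pass through a stacked autoencoder + softmax.

    Ws, bs   -- hidden-layer weights/biases of the stack (hypothetical names)
    theta    -- softmax weight matrix mapping last-layer features to classes
    x        -- one input as a column vector, shape (n_in, 1)
    y_onehot -- one-hot label column vector I, shape (n_classes, 1)
    """
    # Step 1: feedforward pass, caching the activations a^(l) of every layer.
    activations = [x]
    a = x
    for W, b in zip(Ws, bs):
        a = sigmoid(W @ a + b)
        activations.append(a)

    # Softmax classifier on top of the last layer's features.
    scores = theta @ a
    scores -= scores.max()                      # numerical stability
    P = np.exp(scores) / np.exp(scores).sum()   # conditional probabilities

    # Step 2: delta for the top layer, using grad J = theta^T (I - P)
    # and f'(z) = a (1 - a) for the sigmoid.
    delta = -(theta.T @ (y_onehot - P)) * a * (1 - a)

    grad_theta = (P - y_onehot) @ a.T           # softmax weight gradient
    grads_W, grads_b = [], []

    # Steps 3-4: propagate deltas downward and collect partial derivatives
    # grad W^(l) = delta^(l+1) (a^(l))^T,  grad b^(l) = delta^(l+1).
    for l in range(len(Ws) - 1, -1, -1):
        grads_W.insert(0, delta @ activations[l].T)
        grads_b.insert(0, delta)
        if l > 0:
            a_prev = activations[l]
            delta = (Ws[l].T @ delta) * a_prev * (1 - a_prev)

    return grads_W, grads_b, grad_theta
```

Because fine-tuning treats the whole stack as one model, the gradients returned here would be fed to a single optimizer (e.g. L-BFGS or gradient descent) that updates every layer's weights at once, rather than training each autoencoder layer in isolation.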


Chinese-English glossary:

  • stacked autoencoder (may also be translated as "multilayer autoencoder" or "multilayer autoencoding neural network") — stacked autoencoder
  • fine-tuning — fine-tuning
  • backpropagation algorithm — backpropagation algorithm
  • feedforward pass — feedforward pass
  • activation (may also be translated as "response") — activation

From: http://ufldl.stanford.edu/wiki/index.php/%e5%be%ae%e8%b0%83%e5%a4%9a%e5%b1%82%e8%87%aa%e7%bc%96%e7%a0%81%e7%ae%97%e6%b3%95
