What I read for deep learning

Today, I spent some time on two new papers: one proposing a new way of training very deep neural networks (Highway Networks), and one proposing a new activation function for auto-encoders (Zero-bias autoencoders and the benefits of co-adapting features) that evades the use of any regularization method such as contraction or denoising.

Let's start with the first one. Highway Networks proposes a new activation type, similar to LSTM networks, and the authors claim that this peculiar activation is robust to the choice of initialization scheme and to the learning problems that occur in very deep NNs. It is also exciting to see that they trained models with more than 100 layers. The basic intuition here is to learn a gating function, attached to a real activation function, that decides whether to pass the activation or the input itself. The formulation is:

y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T))

T(x, W_T) is the gating function and H(x, W_H) is the real activation. They use a sigmoid activation for the gate and a rectifier for the normal activation in the paper. I also implemented it with Lasagne and tried to replicate the results (I aim to release the code later). It is really impressive to see its ability to learn for many layers (this is the most my PC can handle).
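To make the gating concrete, here is a minimal NumPy sketch of a single highway layer's forward pass (my Lasagne code is not released yet, so take this as an illustrative sketch: the names, shapes, and initial values below are my own choices, not the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def highway_forward(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = H(x) * T(x) + x * (1 - T(x))."""
    H = relu(x @ W_H + b_H)        # plain transform H(x, W_H)
    T = sigmoid(x @ W_T + b_T)     # gate T(x, W_T), in (0, 1) per unit
    return H * T + x * (1.0 - T)   # per-unit mix of transform and carry

# Toy usage: input and output widths must match so the carry path works.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # batch of 4 samples, 8 units
W_H = 0.1 * rng.standard_normal((8, 8))
W_T = 0.1 * rng.standard_normal((8, 8))
b_H = np.zeros(8)
b_T = np.full(8, -2.0)  # negative gate bias starts the layer near the identity
y = highway_forward(x, W_H, b_H, W_T, b_T)
print(y.shape)  # (4, 8)
```

Note how a strongly negative gate bias makes T(x) small, so the layer initially just carries its input through; this is one intuition for why such deep stacks stay trainable regardless of initialization.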

The other paper, Zero-bias autoencoders and the benefits of co-adapting features, suggests the use of unbiased rectifier units for the inference step of AEs. You can train your model with biased rectifier units, but at inference (test) time you should extract features by ignoring the bias term. They show that doing so gives better recognition results on the CIFAR dataset. They also devise a new activation function that has a similar intuition to Highway Networks: again, there is a gating unit that thresholds the normal activation function.
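A tiny sketch of that train/test asymmetry (W and b are placeholders for a trained encoder's parameters; the function names are mine, not the paper's):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def encode_train(x, W, b):
    """Training-time encoder: biased rectifier units."""
    return relu(x @ W + b)

def encode_test(x, W):
    """Test-time feature extraction: same weights, bias term ignored."""
    return relu(x @ W)
```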

The first equation is the threshold (gating) function with a predefined threshold θ (they use θ = 1 in their experiments); the second shows the reconstruction of the proposed model:

h = 1((Wx)² > θ) ⊙ Wx
x̂ = Wᵀ h

Pay attention: in the first equation they use the square of the linear activation for thresholding, and they call this model TLin; they also use the plain linear activation for thresholding, which they call TRec. What this activation does is diminish the small activations, so the model is implicitly regularized without any additional regularizer. This is actually good for learning an over-complete representation of the given data.
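Here is how I read the two gated activations from the description above, as a NumPy sketch (theta defaults to the paper's value of 1; the tied-weights reconstruction is my assumption of the standard AE setup, not something stated here):

```python
import numpy as np

def trec(x, W, theta=1.0):
    """TRec: keep the linear activation z = Wx only where z > theta."""
    z = x @ W
    return z * (z > theta)

def tlin(x, W, theta=1.0):
    """TLin: gate on the squared activation z**2 > theta, so strong
    negative responses pass the threshold as well."""
    z = x @ W
    return z * (z ** 2 > theta)

def reconstruct(h, W):
    """Reconstruction from the gated code, assuming tied weights."""
    return h @ W.T
```

Either way, units with small activations are zeroed out, which is exactly the implicit regularization the authors rely on.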

For more than this silly intro, please refer to the papers, and warn me about any mistakes.

These papers show a coming trend in the deep-learning community: the use of more complex activation functions. We can call it controlling each unit's behavior in a smart way, instead of letting units fire naively. My intuition also agrees with this idea; I believe we need even more sophistication for smart units in our deep models, as in Spike-and-Slab networks.

