What I read for deep learning
Today I spent some time on two new papers: one proposing a new way of training very deep neural networks (Highway Networks), and one proposing a new activation function for auto-encoders (Zero-bias autoencoders and the benefits of co-adapting features) that avoids the use of any regularization method such as contraction or denoising.
Let's start with the first one. Highway Networks proposes a new activation type, similar to LSTM networks, and the authors claim that this peculiar activation is robust to the choice of initialization scheme and to the learning problems that occur in very deep NNs. It is also impressive to see that they trained models with more than 100 layers. The basic intuition is to learn a gating function, attached to a real activation function, that decides whether to pass the activation or the input itself. The formulation is

y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T))

where T(x, W_T) is the gating function and H(x, W_H) is the real activation. They use a sigmoid activation for gating and a rectifier for the normal activation in the paper. I also implemented it with Lasagne and tried to replicate the results (I aim to release the code later). It is really impressive to see its ability to learn even at such depths (the most my PC can handle).
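To make the formulation concrete, here is a minimal sketch of a single highway layer in plain NumPy (the function and weight names, toy dimensions, and gate bias value are mine for illustration, not from the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def highway_layer(x, W_H, b_H, W_T, b_T):
    """One highway layer: y = H(x)*T(x) + x*(1 - T(x)).

    H is the usual rectifier transform, T is the sigmoid gate
    that mixes the transformed activation with the raw input.
    """
    H = relu(x @ W_H + b_H)      # candidate activation
    T = sigmoid(x @ W_T + b_T)   # gate value in (0, 1)
    return H * T + x * (1.0 - T)

# Toy usage: input and output dimensions must match so that
# x can be carried straight through when the gate is closed.
rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((4, d))
W_H = rng.standard_normal((d, d)) * 0.1
W_T = rng.standard_normal((d, d)) * 0.1
b_H = np.zeros(d)
b_T = np.full(d, -2.0)  # negative gate bias: the layer starts close to identity
y = highway_layer(x, W_H, b_H, W_T, b_T)
```

The negative gate bias makes the layer behave nearly as an identity mapping early in training, which is one way to understand why very deep stacks of such layers remain trainable.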
The other paper, Zero-bias autoencoders and the benefits of co-adapting features, suggests the use of unbiased rectifier units for the inference stage of AEs. You can train your model with biased rectifier units, but at inference (test) time you should extract features by ignoring the bias term. They show that doing so gives better recognition on the CIFAR dataset. They also devise a new activation function with an intuition similar to Highway Networks: again, there is a gating unit that thresholds the normal activation function.
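The train-with-bias, infer-without-bias recipe can be sketched like this (a minimal sketch; the function names and toy shapes are mine, not from the paper's code):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def encode_train(x, W, b):
    """Training-time encoder: biased rectifier units."""
    return relu(x @ W + b)

def encode_test(x, W):
    """Test-time feature extraction: the bias term is dropped entirely."""
    return relu(x @ W)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10))   # 4 toy inputs, 10 dims
W = rng.standard_normal((10, 6)) * 0.1
b = rng.standard_normal(6)

train_acts = encode_train(x, W, b)  # used while fitting the AE
features = encode_test(x, W)        # bias ignored when extracting features
```

The same weight matrix W is shared between the two phases; only the bias is discarded at feature-extraction time.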
Their threshold function uses a predefined threshold (they use 1 in their experiments), and a second equation gives the reconstruction of the proposed model. Note that they use the square of a linear activation for thresholding, roughly h = (Wx) ⊙ 1[(Wx)² > 1], and call this model TLin; they also use the plain linear function, h = (Wx) ⊙ 1[Wx > 1], which is called TRec. What this activation does is suppress the small activations, so the model is implicitly regularized without any additional regularizer. This was actually good for learning an over-complete representation of the given data.
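The two thresholded activations are tiny functions; here is a sketch (my own naming, with the paper's threshold of 1 as the default):

```python
import numpy as np

def trec(z, theta=1.0):
    """Truncated rectifier (TRec): pass z only where it exceeds the threshold."""
    return z * (z > theta)

def tlin(z, theta=1.0):
    """Truncated linear (TLin): pass z only where its square exceeds the threshold."""
    return z * (z * z > theta)

z = np.array([-2.0, -0.5, 0.3, 0.5, 1.5])
print(trec(z))  # -> [ 0.   0.   0.   0.   1.5]
print(tlin(z))  # -> [-2.   0.   0.   0.   1.5]
```

Note the difference: thresholding on the square lets TLin keep large negative activations too, while TRec only keeps large positive ones; in both cases small activations are zeroed out, which is the implicit regularization the paper describes.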
For more than this silly intro, please refer to the papers, and warn me about any mistakes.
These papers show a trend coming to the deep learning community: the use of complex activation functions. We can call it controlling each unit's behavior in a smart way, instead of letting units fire naively. My own intuition agrees with this idea; I believe we need even more of this kind of complication for smart units in our deep models, like spike-and-slab networks.