"Reprint" "code-oriented" Learning deep learning (iv) stacked Auto-encoders (SAE)


Today we introduce another very important DL model: the stacked auto-encoder (SAE).

I saved it for last mainly because the UFLDL tutorial already covers it in detail, and the code is very simple (it builds on the NN code from earlier in this series).

Let's start with the basic structure of an autoencoder:

It is essentially a neural network with a single hidden layer whose input and target output are both x, so it is trained unsupervised.
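As a quick sketch (notation mine, not from the original post): the encoder maps x to a hidden code h, the decoder maps h back to a reconstruction, and training minimizes the reconstruction error:

    h = \sigma(W_1 x + b_1), \qquad \hat{x} = \sigma(W_2 h + b_2), \qquad L(x) = \| x - \hat{x} \|^2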

==========================================================================================

Basic Code

saesetup.m

    function sae = saesetup(size)
        % one autoencoder per pair of adjacent layer sizes
        for u = 2 : numel(size)
            sae.ae{u-1} = nnsetup([size(u-1) size(u) size(u-1)]);
        end
    end

saetrain.m

    function sae = saetrain(sae, x, opts)
        for i = 1 : numel(sae.ae)
            disp(['Training AE ' num2str(i) '/' num2str(numel(sae.ae))]);
            % train this layer as an autoencoder: input and target are both x
            sae.ae{i} = nntrain(sae.ae{i}, x, x, opts);
            % feed forward, then take the hidden activations as the next input
            t = nnff(sae.ae{i}, x, x);
            x = t.a{2};
            % remove bias term
            x = x(:, 2:end);
        end
    end

In effect each layer is an autoencoder, and the hidden-layer activations are used as the input to the next layer.
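A minimal usage sketch (the layer sizes, epoch count, and batch size are illustrative; train_x is assumed to be an N x 784 matrix scaled to [0, 1], as in the toolbox's MNIST examples):

    % stack two autoencoders: 784 -> 100 -> 50
    sae = saesetup([784 100 50]);
    opts.numepochs = 10;                 % training options consumed by nntrain
    opts.batchsize = 100;
    sae = saetrain(sae, train_x, opts);  % greedy layer-wise pre-training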

Various variants

So that this article does not have too little content... let's single out a few of its variants and discuss them.

Sparse Autoencoder:

This is the version from UFLDL; the code in the toolbox is basically the same as in the UFLDL exercise:

In nnff.m, a running estimate of the mean activation of each layer is maintained:

    nn.p{i} = 0.99 * nn.p{i} + 0.01 * mean(nn.a{i}, 1);

In nnbp.m, the sparsity error term is then computed from that estimate and added to the backpropagated error:

    pi = repmat(nn.p{i}, size(nn.a{i}, 1), 1);
    sparsityError = [zeros(size(nn.a{i},1),1) nn.nonSparsityPenalty * (-nn.sparsityTarget ./ pi + (1 - nn.sparsityTarget) ./ (1 - pi))];

This is the derivative of the KL-divergence sparsity penalty with respect to the mean activation (a column of zeros is prepended for the bias unit).
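A usage sketch (the penalty weight and target are illustrative values; the field names are the ones nnsetup initializes):

    sae = saesetup([784 100]);
    sae.ae{1}.nonSparsityPenalty = 3;     % weight of the KL sparsity penalty
    sae.ae{1}.sparsityTarget     = 0.05;  % desired mean hidden activation
    sae = saetrain(sae, train_x, opts);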

Denoising Autoencoder:

A denoising autoencoder builds on the plain autoencoder by adding noise to the input x; it is equivalent to applying dropout to the input layer.

The implementation in the toolbox is very simple. In nntrain.m:

    batch_x = batch_x .* (rand(size(batch_x)) > nn.inputZeroMaskedFraction);

That is, a random fraction nn.inputZeroMaskedFraction of the entries of x is set to 0, while the reconstruction target stays the clean x. In practice the denoising autoencoder appears to work better than the sparse autoencoder.
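A usage sketch (0.5 is an illustrative corruption level; inputZeroMaskedFraction is the field nntrain checks before applying the mask):

    sae = saesetup([784 100]);
    sae.ae{1}.inputZeroMaskedFraction = 0.5;  % zero out half of the input entries
    sae = saetrain(sae, train_x, opts);       % targets remain the uncorrupted x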

Contractive Autoencoder:

This variant is "Contractive auto-encoders:explicit invariance during feature extraction" proposed

The paper also includes a brief survey of autoencoders, which I found quite good.

The contractive autoencoder objective is:

    J_{CAE}(\theta) = \sum_{x \in D} \Big( L\big(x, g(h(x))\big) + \lambda \, \|J_h(x)\|_F^2 \Big)

where the penalty is the squared Frobenius norm of the Jacobian of the hidden layer:

    \|J_h(x)\|_F^2 = \sum_{ij} \left( \frac{\partial h_j(x)}{\partial x_i} \right)^2

Here h_j is the j-th hidden unit viewed as a function of the input, and it is differentiated with respect to x.

The paper says this term "encourages the mapping to the feature space to be contractive in the neighborhood of the training data".

For sigmoid hidden units, the penalty has a simple closed form, which is what the implementation computes:

    \|J_h(x)\|_F^2 = \sum_j \big( h_j (1 - h_j) \big)^2 \sum_i W_{ji}^2
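A minimal MATLAB sketch of that closed form (this is not the paper author's code; x, W, and b are hypothetical names for one input row, the hidden-layer weights, and the biases):

    % x: 1 x d input row, W: k x d hidden weights, b: 1 x k biases
    h    = 1 ./ (1 + exp(-(x * W' + b)));           % sigmoid hidden activations, 1 x k
    Jfro = sum((h .* (1 - h)).^2 .* sum(W.^2, 2)'); % ||J_h(x)||_F^2, a scalar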

For code, see the implementation released by the paper's author (the original post linked to it). The parts that differ from a plain autoencoder are mainly these functions:

    jacobian(self, x)
    _jacobi_loss()
    _fit_reconstruction()

They are fairly simple, so I will not go into detail.

Summary: also refer to http://www.idsia.ch/~ciresan/data/icann2011.pdf; I will study it when I have time ~

"Reprint" "code-oriented" Learning deep learning (iv) stacked Auto-encoders (SAE)

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.