Commenting on yusugomori's DA code --- dA.h


DA is short for "Denoising Autoencoder". This post continues annotating yusugomori's code, learning while commenting. After reading some DA material (which I largely "reprinted" in an earlier post), one question kept coming up: what is the difference between a DA and an RBM? (Don't laugh. I am not an "academic" and have no systematic grounding in Deep Learning theory; had I studied Deep Learning in order, this question might never have arisen.) I now have a rough understanding; for details, see: [Deep Learning Study Notes] Autoencoder. Then I had another question: how is the DA weight-update formula derived? I know it is the back-propagation algorithm, but I have not found any material that works through the specific formulas and partial derivatives, so I simply take the yusugomori code as correct.
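For what it's worth, the update rule that the `train` method below implements can be recovered with ordinary back-propagation. Here is a sketch in my own notation (not from the original post), under the tied-weights convention (the decoder uses the transpose of the encoder weights) and a cross-entropy reconstruction loss:

```latex
% Forward pass: corrupt, encode, decode
% \tilde{x} is the corrupted input, y the hidden code, z the reconstruction
\begin{aligned}
y_j &= s\Big(\textstyle\sum_i W_{ji}\,\tilde{x}_i + b_j\Big), \qquad
z_i = s\Big(\textstyle\sum_j W_{ji}\,y_j + c_i\Big), \qquad
s(a) = \frac{1}{1+e^{-a}}, \\
L(x,z) &= -\sum_i \big[\, x_i \log z_i + (1-x_i)\log(1-z_i) \,\big].
\end{aligned}

% Back-propagation: using s'(a) = s(a)(1-s(a)), the sigmoid and log terms cancel
\begin{aligned}
\delta^{v}_i &= z_i - x_i,
&\quad \frac{\partial L}{\partial c_i} &= \delta^{v}_i, \\
\delta^{h}_j &= \Big(\textstyle\sum_i W_{ji}\,\delta^{v}_i\Big)\, y_j (1-y_j),
&\quad \frac{\partial L}{\partial b_j} &= \delta^{h}_j, \\
\frac{\partial L}{\partial W_{ji}} &= \delta^{h}_j\,\tilde{x}_i + \delta^{v}_i\, y_j
&& \text{(two terms, since tied } W \text{ appears in encode and decode).}
\end{aligned}

% Gradient descent with learning rate \eta:
W_{ji} \leftarrow W_{ji} - \eta\,\frac{\partial L}{\partial W_{ji}}, \qquad
b_j \leftarrow b_j - \eta\,\delta^{h}_j, \qquad
c_i \leftarrow c_i - \eta\,\delta^{v}_i.
```

This matches the signs in yusugomori's implementation, which stores `L_vbias = x - z` (i.e. the negative of my delta) and adds it to the parameters, which is the same update.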

 


The commented header file:


 

```cpp
// The class of denoising auto-encoder
class dA {
public:
    int N;          // the number of training samples
    int n_visible;  // the number of visible nodes
    int n_hidden;   // the number of hidden nodes
    double **W;     // the weights connecting visible and hidden nodes
    double *hbias;  // the biases of the hidden nodes
    double *vbias;  // the biases of the visible nodes

public:
    // initialize the parameters
    dA(int,         // N
       int,         // n_visible
       int,         // n_hidden
       double**,    // W
       double*,     // hbias
       double*      // vbias
       );
    ~dA();

    // add noise to the input
    void get_corrupted_input(int*,    // the original 0-1 input vector       -- input
                             int*,    // the resulting noised 0-1 vector     -- output
                             double   // the noise probability p (binomial trial) -- input
                             );

    // encode: compute the probability output of the hidden nodes
    // p(h_i|v) = sigmoid(sum_j(v_j * w_ij) + b_i), the same as in an RBM;
    // unlike an RBM, it does not sample a 0-1 state from a Bernoulli distribution
    void get_hidden_values(int*,     // the input from the visible nodes
                           double*   // the output of the hidden nodes
                           );

    // decode: compute the probability output of the visible nodes
    // p(v_i|h) = sigmoid(sum_j(h_j * w_ij) + c_i), the same as in an RBM;
    // unlike an RBM, it does not sample a 0-1 state from a Bernoulli distribution
    void get_reconstructed_input(double*,  // the input from the hidden nodes
                                 double*   // the reconstructed output of the visible nodes
                                 );

    // train the model on a single sample
    void train(int*,    // the input sample from the visible nodes
               double,  // the learning rate
               double   // corruption_level, the noise probability
               );

    // reconstruct an input sample
    void reconstruct(int*,    // the input sample          -- input
                     double*  // the reconstructed values  -- output
                     );
};
```

 
