Neural networks and deep learning (2): Gradient descent algorithm and stochastic gradient descent algorithm

This post summarizes some of the material from the first chapter of Neural Networks and Deep Learning.

Learning with the gradient descent algorithm (learning with gradient descent)

1. Goal

We want an algorithm that lets us find weights and biases so that the output of the network approximates y(x) for all training inputs x.

2. Cost function (cost function)

Define a cost function (also called a loss function or objective function) as follows:

C(w, b) = \frac{1}{2n} \sum_x \| y(x) - a \|^2

C: the quadratic cost function, sometimes referred to as the mean squared error or MSE
w: the weights of the network
b: the biases
n: the number of examples in the training data set
x: an input value
a: the output of the network when x is the input
\| v \|: the norm (length) of the vector v

The smaller C(w, b) is, the smaller the difference between the network's predicted outputs and the true values, so our goal is to minimize C(w, b). The purpose of training the neural network is to find weights and biases that make the quadratic cost function C(w, b) as small as possible.
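As a concrete illustration (not part of the original post), here is a minimal NumPy sketch of this quadratic cost; the array names outputs and targets are assumed stand-ins for a and y(x).

```python
import numpy as np

def quadratic_cost(outputs, targets):
    """Quadratic cost C(w, b) = 1/(2n) * sum_x ||y(x) - a||^2.

    outputs -- array of shape (n, k): the network output a for each of n inputs
    targets -- array of shape (n, k): the desired output y(x) for each input
    """
    n = outputs.shape[0]
    return np.sum(np.linalg.norm(targets - outputs, axis=1) ** 2) / (2 * n)

# Tiny example: two training inputs, two output neurons each.
a = np.array([[0.8, 0.2], [0.1, 0.9]])   # network outputs
y = np.array([[1.0, 0.0], [0.0, 1.0]])   # desired outputs
print(quadratic_cost(a, y))              # small value: outputs are close to the targets
```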

3. Gradient Descent

This minimization problem can be solved with gradient descent (gradient descent).

Suppose C(v) is a function of two variables v1 and v2. With only a couple of variables the minimum can usually be found with calculus, but when v contains a very large number of variables, solving for the minimum with calculus is no longer practical.

The gradient descent algorithm works by repeatedly computing the gradient ∇C and then moving in the direction opposite to it, "rolling down" into the valley.
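To see why moving against the gradient decreases the cost, consider the first-order approximation used in the book's first chapter (a brief sketch of that argument):

\Delta C \approx \nabla C \cdot \Delta v

Choosing the move \Delta v = -\eta \nabla C, where \eta > 0 is a small learning rate, gives

\Delta C \approx -\eta \, \nabla C \cdot \nabla C = -\eta \| \nabla C \|^2 \le 0

so, for a small enough \eta, every step reduces C.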

That is, each time we descend to a new point, we must compute the gradient there to determine the next direction to go down.

Update rules for the weights and biases, where \eta is the learning rate:

w_k \rightarrow w_k' = w_k - \eta \frac{\partial C}{\partial w_k}

b_l \rightarrow b_l' = b_l - \eta \frac{\partial C}{\partial b_l}
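As a rough sketch (not from the original post), a single gradient descent update could be written as follows; weights, biases, grad_w, grad_b, and eta are assumed names for the parameters, their gradients, and the learning rate.

```python
def gradient_descent_step(weights, biases, grad_w, grad_b, eta):
    """Apply one gradient descent update: move every parameter a small
    step in the direction opposite its gradient.

    weights, biases -- lists of parameter arrays, one per layer
    grad_w, grad_b  -- gradients of the cost C with respect to each array
    eta             -- learning rate
    """
    new_weights = [w - eta * gw for w, gw in zip(weights, grad_w)]
    new_biases = [b - eta * gb for b, gb in zip(biases, grad_b)]
    return new_weights, new_biases
```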

4. Stochastic gradient descent algorithm (stochastic gradient descent)

Using the gradient descent algorithm in practice can make learning quite slow. This is because:

To compute the gradient ∇C, the gradient ∇C_x has to be computed separately for every training example x and then averaged. If the training data set is very large, this takes a long time, and learning becomes slow.

In practice, the stochastic gradient descent algorithm (stochastic gradient descent) is therefore used instead.

Basic idea: randomly pick a small sample of the training examples, x_1, x_2, ..., x_m (a mini-batch), and use it to estimate ∇C, which greatly speeds up learning.

Provided the sample size m is large enough,

\nabla C \approx \frac{1}{m} \sum_{j=1}^{m} \nabla C_{x_j}

and substituting this estimate into the update rules gives

w_k \rightarrow w_k' = w_k - \frac{\eta}{m} \sum_{j} \frac{\partial C_{x_j}}{\partial w_k}

b_l \rightarrow b_l' = b_l - \frac{\eta}{m} \sum_{j} \frac{\partial C_{x_j}}{\partial b_l}
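Below is a minimal sketch (not from the original post) of one epoch of mini-batch stochastic gradient descent; the helper compute_gradients, which would be implemented with backpropagation and return the mini-batch-averaged gradients, is hypothetical.

```python
import random

def sgd_epoch(training_data, weights, biases, eta, mini_batch_size, compute_gradients):
    """One epoch of stochastic gradient descent over the training data.

    training_data     -- list of (x, y) training pairs
    weights, biases   -- lists of parameter arrays for the network
    eta               -- learning rate
    mini_batch_size   -- number of examples m used to estimate the gradient
    compute_gradients -- hypothetical helper returning (grad_w, grad_b)
                         averaged over a mini-batch (e.g. via backpropagation)
    """
    random.shuffle(training_data)
    # Split the shuffled data into mini-batches of m examples each.
    mini_batches = [training_data[k:k + mini_batch_size]
                    for k in range(0, len(training_data), mini_batch_size)]
    for batch in mini_batches:
        grad_w, grad_b = compute_gradients(batch, weights, biases)
        # Mini-batch update: move each parameter against the averaged gradient.
        weights = [w - eta * gw for w, gw in zip(weights, grad_w)]
        biases = [b - eta * gb for b, gb in zip(biases, grad_b)]
    return weights, biases
```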
A new mini-batch is then selected and trained on, and this repeats until all the training examples have been used up, which completes one epoch.

Author: Tsiangleo. Source: http://www.cnblogs.com/tsiangleo/. The copyright of this article belongs to the author and Cnblogs. Reposting is welcome without prior consent, provided this paragraph is kept and a prominent link to the original is placed on the article page. Corrections and discussion are welcome.
