Linear regression, gradient descent, normal equations - Stanford ML public course notes 1-2




Some frequently asked questions:

1. Why does the loss function use least squares rather than, say, the absolute value of the error, or its cube? The course answers this later, mainly by justifying it from the maximum-likelihood perspective. If you then ask whether the maximum-likelihood argument is itself scientific and reasonable, have you heard of the law of large numbers and the central limit theorem? At that point it starts to feel like a philosophical question.
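For reference, the maximum-likelihood argument the course gives later can be sketched in a few lines of LaTeX: assume each target is linear in the inputs plus i.i.d. Gaussian noise, and maximize the log-likelihood over theta.

    % Model: y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)}, \quad \epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2) \text{ i.i.d.}
    \ell(\theta) = \sum_{i=1}^{m} \log \frac{1}{\sqrt{2\pi}\,\sigma}
                   \exp\!\left( -\frac{\big(y^{(i)} - \theta^T x^{(i)}\big)^2}{2\sigma^2} \right)
                 = m \log \frac{1}{\sqrt{2\pi}\,\sigma}
                   - \frac{1}{\sigma^2} \cdot \frac{1}{2} \sum_{i=1}^{m} \big(y^{(i)} - \theta^T x^{(i)}\big)^2

The first term is constant in \theta, so maximizing \ell(\theta) is exactly minimizing the least-squares cost J(\theta) = \frac{1}{2}\sum_i (y^{(i)} - \theta^T x^{(i)})^2. The squared loss is therefore not arbitrary: it falls out of the Gaussian noise assumption, which the central limit theorem makes plausible.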

2. On the learning rate in gradient descent: our objective function is a convex quadratic (bowl-shaped). A learning rate that is too large causes oscillation; one that is too small makes convergence slow. Is it then necessary to adjust the learning rate dynamically, taking large steps at the start of the descent and smaller ones later? Not strictly, because each update is (learning rate * gradient), and the gradient itself shrinks gradually as we approach the minimum. That said, in practice the Python code in "Machine Learning in Action" (pp. 82-83) uses an improved strategy in which the learning rate does gradually decline, though not strictly monotonically. Part of the code:

    for j in range(numIter):
        for i in range(m):
            alpha = 4 / (1.0 + j + i) + 0.01

So alpha decreases on the order of 1/(j+i) at each step, and when j << max(i) the decrease is not strictly monotone. This resembles the cooling schedule in simulated annealing. A fuller runnable sketch of this strategy follows below.
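Here is a minimal sketch of the book's strategy, adapted to linear regression (the book applies it to logistic regression; the function and variable names here are my own):

    import numpy as np

    def stoch_grad_descent(X, y, num_iter=150):
        # Stochastic gradient descent with the annealed learning rate
        # from "Machine Learning in Action" (pp. 82-83), adapted to a
        # plain linear-regression (LMS) update.
        m, n = X.shape
        theta = np.zeros(n)
        for j in range(num_iter):
            indices = list(range(m))
            for i in range(m):
                # alpha shrinks roughly like 1/(j+i) but never below 0.01,
                # so step sizes decline without being strictly monotone.
                alpha = 4 / (1.0 + j + i) + 0.01
                # Pick a random remaining example to reduce periodic oscillation.
                rand_idx = int(np.random.uniform(0, len(indices)))
                k = indices[rand_idx]
                error = np.dot(theta, X[k]) - y[k]
                theta = theta - alpha * error * X[k]
                del indices[rand_idx]
        return theta

The random selection of each example (rather than cycling in order) is the book's second improvement: it breaks up the periodic fluctuations that a fixed visiting order produces.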

3. Can stochastic gradient descent find the value that minimizes the cost function? Not necessarily; as the number of iterations grows, it tends to wander around in a neighborhood of the optimal solution rather than converge to it exactly. But that value is usually good enough for us, and machine learning is not a 100%-exact business to begin with.

4. How much data counts as "big"? How much data can batch gradient descent typically handle? A common rule of thumb is on the order of a thousand examples per feature, but nothing is absolute; experiment with the characteristics of your own data.

5. Since the normal equations give a direct closed-form solution, why use gradient descent at all? Because the normal equations involve a matrix inversion, and the inverse does not always exist, for example when the number of samples is less than the number of features (m < n). Moreover, when there are many samples and many features, this becomes a huge matrix, and solving it directly is clearly inadvisable.
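As an illustration (a NumPy sketch; the function name is my own), the closed-form solution and its failure mode look like this:

    import numpy as np

    def normal_equation(X, y):
        # Closed-form least squares: theta = (X^T X)^{-1} X^T y.
        XtX = X.T @ X
        try:
            # Raises LinAlgError when X^T X is singular,
            # e.g. m < n or linearly dependent features.
            return np.linalg.solve(XtX, X.T @ y)
        except np.linalg.LinAlgError:
            # A common fallback: the Moore-Penrose pseudo-inverse.
            return np.linalg.pinv(XtX) @ X.T @ y

Even with the pseudo-inverse fallback, forming and solving X^T X costs on the order of n^2 m plus n^3 operations, which is why gradient methods win once m and n are large.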

6. Stochastic gradient descent is also described as an online learning method. The biggest difference between online and offline is that offline processing is batch, one-shot, while online processing is more like streaming: each example updates the model as it arrives, as in the sketch below.
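A minimal sketch of what "online" means in practice, assuming a linear model with the LMS update (names here are hypothetical):

    import numpy as np

    def online_update(theta, x, y, alpha=0.01):
        # One streaming update: adjust theta using a single newly
        # arrived example (x, y), which can then be discarded.
        error = np.dot(theta, x) - y
        return theta - alpha * error * x

    # Each example updates the model the moment it arrives.
    theta = np.zeros(3)
    for x, y in [(np.array([1.0, 2.0, 0.5]), 3.0),
                 (np.array([1.0, 0.5, 1.5]), 2.0)]:
        theta = online_update(theta, x, y)

No dataset is ever held in memory as a whole, which is exactly what distinguishes this from the batch (offline) setting.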
