ML Basic Knowledge


A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.

ML Algorithms Overview
    • Supervised learning <= "teach" the program
      • Given data with the "right answers", then predict
      • Regression: predict a continuous value
    • Unsupervised learning <= let it learn by itself
      • Given data without labels, then find some structure in the data
    • Others: Reinforcement Learning, Recommender Systems
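To make the distinction concrete, here is a minimal sketch (toy data and parameters are made up for illustration, NumPy only): supervised learning fits a predictor to labeled pairs (x, y), while unsupervised learning finds structure, here two cluster centers, in unlabeled points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised: the "right answers" (labels y) are given; learn to predict them.
X = rng.uniform(0, 10, size=20)
y = 3.0 * X + 1.0 + rng.normal(0, 0.1, size=20)     # labeled data
slope, intercept = np.polyfit(X, y, deg=1)          # fit a line to (X, y)

# Unsupervised: no labels; find structure (here, two cluster centers).
points = np.concatenate([rng.normal(0, 1, 50), rng.normal(8, 1, 50)])
centers = np.array([points.min(), points.max()])    # naive initialization
for _ in range(10):                                 # tiny k-means with k = 2
    assign = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([points[assign == k].mean() for k in range(2)])

print(slope, intercept)        # recovered line, close to 3.0 and 1.0
print(np.sort(centers))        # recovered centers, close to 0 and 8
```

The supervised half recovers the known slope and intercept because labels were given; the unsupervised half never sees labels and can only report the grouping it found.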

Regression Overview

To get the prediction model, we need to define the hypothesis function and determine its parameters.

    • Hypothesis function & Cost function
      • Hypothesis function hθ(x)
      • Cost function J(θ)
    • Gradient descent
    • Newton's method
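The two optimizers above can be contrasted on a toy cost, J(θ) = (θ − 3)². This is only a sketch (the cost, learning rate, and iteration count are made up for illustration): gradient descent takes many small steps scaled by a learning rate α, while Newton's method also uses the second derivative and minimizes a quadratic in a single step.

```python
# Minimize a toy convex cost J(θ) = (θ - 3)^2 in two ways.

def J(theta):   return (theta - 3.0) ** 2
def dJ(theta):  return 2.0 * (theta - 3.0)   # first derivative J'(θ)
def d2J(theta): return 2.0                   # second derivative J''(θ)

# Gradient descent: θ ← θ - α·J'(θ), repeated until convergence.
theta, alpha = 0.0, 0.1
for _ in range(100):
    theta -= alpha * dJ(theta)
print(theta)  # close to the minimizer 3.0

# Newton's method: θ ← θ - J'(θ)/J''(θ); exact in one step for a quadratic.
theta_n = 0.0
theta_n -= dJ(theta_n) / d2J(theta_n)
print(theta_n)
```

The contrast shows the tradeoff: Newton's method converges in far fewer iterations but needs second derivatives, which for many parameters means computing and inverting a Hessian.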

Linear Regression
    • Hypothesis function hθ(x) = θᵀx
    • Gradient descent for linear regression

    • Feature Scaling
      • Make sure features are on similar scales
    • Learning Rate α
      • Pick the one that seems to make J(θ) decrease fastest
    • Features & Polynomial Regression
    • Normal equation
      • Too many features
        • Regularization, or delete some features
      • Redundant features (e.g. linearly dependent features)
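Putting the pieces together, here is a minimal linear regression sketch (toy data; α = 0.05 and the iteration count are illustrative choices, not prescribed values): gradient descent on the squared-error cost with hθ(x) = θᵀx, checked against the closed-form normal equation.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100
x = rng.uniform(0, 5, size=m)
y = 4.0 + 2.0 * x + rng.normal(0, 0.01, size=m)   # true model: y ≈ 4 + 2x

X = np.column_stack([np.ones(m), x])   # intercept column, so hθ(x) = θᵀx

# Gradient descent on J(θ) = (1/2m) Σ (hθ(x⁽ⁱ⁾) - y⁽ⁱ⁾)²
theta = np.zeros(2)
alpha = 0.05
for _ in range(5000):
    grad = X.T @ (X @ theta - y) / m   # ∂J/∂θ in vectorized form
    theta -= alpha * grad

# Normal equation: θ = (XᵀX)⁻¹ Xᵀ y — closed form, no α to tune,
# but XᵀX must be invertible and the solve costs O(n³) in the feature count.
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

print(theta, theta_ne)   # both close to [4, 2]
```

With a single feature no scaling is needed; with features on very different scales, gradient descent would need feature scaling and a smaller α, while the normal equation is unaffected.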
Logistic Regression
    • Hypothesis function: output in [0, 1]
    • Gradient descent & Newton's method for logistic regression
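A minimal logistic regression sketch on made-up 1-D data: the hypothesis g(θᵀx) squashes θᵀx into [0, 1] via the sigmoid, and the vectorized gradient-descent update turns out to have the same form as in linear regression, just with hθ(x) = g(θᵀx).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D binary data: class 1 tends to have larger x.
x = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])
X = np.column_stack([np.ones(100), x])   # intercept column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))      # hθ(x) = g(θᵀx) ∈ (0, 1)

# Gradient descent on the logistic (cross-entropy) cost.
theta = np.zeros(2)
alpha = 0.1
for _ in range(2000):
    grad = X.T @ (sigmoid(X @ theta) - y) / len(y)
    theta -= alpha * grad

# Predict class 1 when hθ(x) ≥ 0.5, i.e. when θᵀx ≥ 0.
pred = (sigmoid(X @ theta) >= 0.5).astype(float)
print((pred == y).mean())   # accuracy, near 1.0 on this well-separated data
```

Newton's method would use the same gradient plus the Hessian and typically converge in a handful of iterations, at the cost of a matrix solve per step.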

Regularization *

Regularization is intended to mitigate the overfitting problem. Too many parameters raise model complexity and make the model easy to overfit, i.e. the training error becomes very small. But a small training error is not the ultimate goal; the goal is a small test error, that is, accurate prediction on new samples. Therefore, while minimizing the training error, we also need to keep the model "simple", so that the resulting parameters generalize well (i.e. the test error is also small); keeping the model "simple" is what the regularization term achieves.

To put it simply, we need a tradeoff between a small training error (objective 1) and a simple model (objective 2).

    • Overfitting problem (too many features)

  

    • Regularized linear regression

    • Regularized logistic regression

    • Regularization Penalty & L2 Norm *
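As a sketch of the L2 penalty on regularized linear regression (toy data; λ = 0.1 and the degree-9 feature map are arbitrary illustrative choices): adding λI inside the normal equation shrinks the parameters, trading a slightly larger training error for a simpler, better-generalizing model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Few noisy samples with many polynomial features → prone to overfitting.
m = 15
x = np.linspace(0, 1, m)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=m)
X = np.column_stack([x ** d for d in range(10)])   # degree-9 polynomial features

def ridge_fit(X, y, lam):
    # Regularized normal equation: θ = (XᵀX + λI)⁻¹ Xᵀ y.
    # By convention the intercept (first column) is not penalized.
    I = np.eye(X.shape[1])
    I[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + lam * I, X.T @ y)

theta_over = ridge_fit(X, y, lam=0.0)   # unregularized: large, wild coefficients
theta_reg  = ridge_fit(X, y, lam=0.1)   # L2 penalty shrinks θ → simpler model

print(np.abs(theta_over).max(), np.abs(theta_reg).max())
```

The unregularized fit chases the noise with huge coefficients of alternating sign; the λ = 0.1 fit keeps every feature but keeps all coefficients small, which is exactly the "simple model" objective from the tradeoff above.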
