A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
ML Algorithms Overview
- Supervised learning <= "teach" the program
    - Given data with the "right answers", then predict for new inputs
    - Regression: predict continuous-valued output
- Unsupervised learning <= let it learn by itself
    - Given data without labels, then find some structure in the data
- Others: reinforcement learning, recommender systems
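The supervised/unsupervised distinction above can be illustrated with a minimal NumPy sketch (the toy data and the crude mean-threshold "clustering" are my own illustrative choices, not from these notes):

```python
import numpy as np

# Supervised: we have "right answers" y, so we fit a predictor.
# Toy data: roughly y = 2x with a little noise.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])
slope = (X @ y) / (X @ X)      # least-squares fit through the origin
pred = slope * 5.0             # predict for an unseen input

# Unsupervised: no labels, so we look for structure instead.
# Toy data: two obvious groups on the number line.
data = np.array([1.0, 1.2, 0.9, 10.0, 10.3, 9.8])
threshold = data.mean()        # crude split into two groups
cluster = (data > threshold).astype(int)
```

A real unsupervised method would use something like k-means, but the point is the same: no `y` is given, only structure in `data`.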
Regression Overview
To get the prediction model, we need to define the hypothesis function and determine its parameters.
- Hypothesis function & cost function
    - Hypothesis function hθ(x)
    - Cost function J(θ)
- Gradient descent
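The three pieces above fit together as follows; this is a minimal sketch (the function names and toy data are mine) of the hypothesis, the squared-error cost, and one gradient descent step:

```python
import numpy as np

def h(theta, X):
    """Hypothesis h_theta(x) = theta^T x (X has a leading column of ones)."""
    return X @ theta

def J(theta, X, y):
    """Squared-error cost J(theta) = 1/(2m) * sum((h(x) - y)^2)."""
    m = len(y)
    return ((h(theta, X) - y) ** 2).sum() / (2 * m)

def gradient_step(theta, X, y, alpha):
    """One simultaneous update: theta := theta - alpha * dJ/dtheta."""
    m = len(y)
    grad = X.T @ (h(theta, X) - y) / m
    return theta - alpha * grad
```

Repeating `gradient_step` until convergence minimizes J(θ); each step should lower the cost when the learning rate α is small enough.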
Linear Regression
- Hypothesis function hθ(x) = θᵀx
- Gradient Descent for linear regression
- Feature Scaling
    - Make sure features are on similar scales
- Learning rate α
    - Pick the one that seems to make J(θ) decrease fastest
- Features & Polynomial Regression
- Normal equation
    - XᵀX may be non-invertible when:
        - There are redundant features (e.g. linearly dependent features)
        - There are too many features (m ≤ n): regularize or delete some
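A minimal sketch of gradient descent for linear regression with feature scaling (mean normalization); the helper names and toy data are illustrative, not from these notes:

```python
import numpy as np

def scale_features(X):
    """Mean-normalize each column so gradient descent converges faster."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

def gradient_descent(X, y, alpha=0.1, iters=500):
    """Batch gradient descent on the squared-error cost for h(x) = theta^T x."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        theta -= alpha * X.T @ (X @ theta - y) / m
    return theta
```

Scale the raw features first, then prepend the column of ones; new inputs must be scaled with the same mu and sigma before predicting.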
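The normal equation solves for θ in closed form, with no learning rate and no iteration; a minimal sketch, using `pinv` so the redundant-features (non-invertible XᵀX) case is also handled:

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form solution: theta = (X^T X)^(-1) X^T y.
    pinv (pseudo-inverse) also works when X^T X is non-invertible,
    e.g. with redundant / linearly dependent features."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```

This is O(n³) in the number of features, so gradient descent is preferred when n is large.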
Logistic Regression
- Hypothesis function: sigmoid, outputs values in [0, 1]
- Gradient descent & Newton's method for logistic regression
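A minimal sketch of logistic regression trained by gradient descent (toy data and names are mine): the sigmoid keeps hθ(x) in (0, 1), and the gradient of the cross-entropy cost has the same form as in linear regression, Xᵀ(h − y)/m:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, X):
    """Logistic hypothesis g(theta^T x), always in (0, 1)."""
    return sigmoid(X @ theta)

def logistic_gradient_descent(X, y, alpha=0.5, iters=2000):
    """Gradient descent on the cross-entropy cost.
    The gradient is X^T (h - y) / m, same shape as in linear regression."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        theta -= alpha * X.T @ (h(theta, X) - y) / m
    return theta
```

Newton's method would replace the α-scaled gradient step with a Hessian-scaled one and typically converges in far fewer iterations, at a higher cost per iteration.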
Regularization*
Regularization is intended to mitigate the overfitting problem. Too many parameters raise model complexity, making the model easy to fit to the training set, so the training error will be very small. But a small training error is not the ultimate goal; the goal is a small test error, i.e. accurate prediction on new samples. Therefore, while minimizing the training error, we also need to keep the model "simple", so that the learned parameters generalize well (the test error is also small); keeping the model "simple" is what the regularization term achieves.
To put it simply, we need to trade off between a small training error (goal 1) and a simple model (goal 2).
- Overfitting problem (too many features)
- Regularized linear regression
- Regularized logistic regression
- Regularization Penalty & L2 Norm *
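For regularized linear regression, the L2 penalty adds λ·I (with the intercept left unpenalized) inside the normal equation; a minimal sketch, with illustrative names:

```python
import numpy as np

def ridge_normal_equation(X, y, lam):
    """Regularized normal equation:
       theta = (X^T X + lam * L)^(-1) X^T y,
    where L is the identity with L[0, 0] = 0 so the intercept
    term theta_0 is not penalized."""
    n = X.shape[1]
    L = np.eye(n)
    L[0, 0] = 0.0
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)
```

With λ = 0 this reduces to ordinary least squares; larger λ shrinks the non-intercept weights toward zero, trading a bit of training error for a simpler model, exactly the trade-off described above. A side benefit: for λ > 0 the matrix XᵀX + λL is invertible even when XᵀX is not.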
Reference
- http://www.52ml.net/12019.html
- http://blog.csdn.net/zouxy09/article/details/24971995/