Statistical learning comprises supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.
A statistical learning method consists of the model's hypothesis space, a criterion for model selection, and an algorithm for model learning. These are called the three elements of a statistical learning method: model, strategy, and algorithm.
Computer science consists of three dimensions: systems, computation, and information.
The model belongs to the set of mappings from the input space to the output space; this set is the hypothesis space.
METHOD = model + strategy + algorithm
The ability of a learning method to predict unknown data is called its generalization ability.
If we blindly pursue predictive accuracy on the training data, the complexity of the selected model is often higher than that of the true model; this is over-fitting. Over-fitting means the selected model contains so many parameters that it predicts the known (training) data well but the unknown (test) data poorly. Model selection is therefore designed to avoid over-fitting and to improve the model's predictive ability.
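A toy illustration of the point above, with synthetic data (all numbers here are hypothetical): a "model" that simply memorizes the training set achieves zero training error but cannot generalize, while a simpler model that matches the true structure predicts unseen points well.

```python
# Synthetic data drawn roughly from y = 2x (hypothetical example).
train = [(0.0, 0.1), (1.0, 1.9), (2.0, 4.2), (3.0, 5.8)]
test = [(1.5, 3.1), (2.5, 4.9)]

# Over-fit model: memorize every training pair exactly.
table = dict(train)
def memorizer(x):
    return table.get(x, 0.0)  # falls back to a default for unseen inputs

# Simple model matching the true structure: y = 2x.
def linear(x):
    return 2.0 * x

def mse(model, data):
    return sum((y - model(x)) ** 2 for x, y in data) / len(data)

print(mse(memorizer, train))  # 0.0: perfect on the known data
print(mse(memorizer, test))   # large: poor on unknown data
print(mse(linear, test))      # small: the simpler model generalizes
```

The memorizer drives the training error to zero yet does worse than the linear model on every unseen input, which is exactly the gap over-fitting opens up.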
A typical method for model selection is regularization. Regularization implements the structural risk minimization strategy.
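A minimal sketch of regularization as structural risk minimization, assuming a 1-D linear model y = w·x with an L2 (ridge) penalty; the data is synthetic. Minimizing the empirical risk plus λ·w² has a closed form, and a larger λ shrinks w toward zero, i.e. toward a simpler model.

```python
# Synthetic data drawn roughly from y = 2x (hypothetical example).
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]

def ridge_w(lam):
    # argmin_w  Σ(y - w*x)² + λ*w²  has the closed-form solution below:
    # w = Σ x*y / (Σ x² + λ)
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

print(ridge_w(0.0))   # λ = 0: ordinary least squares
print(ridge_w(10.0))  # larger λ → smaller |w|, a "simpler" model
```

The penalty term is the structural part of the risk: it trades a little training-set fit for lower model complexity.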
Another common model selection method is cross-validation:
- Simple cross-validation
- K-fold cross-validation
- Leave-one-out cross-validation
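The K-fold variant can be sketched in pure Python as follows. The "model" here is just the training-fold mean used as a stand-in predictor; the data and the round-robin split are illustrative assumptions.

```python
def k_fold_cv(data, k):
    """Average held-out MSE over k folds, with the fold mean as predictor."""
    folds = [data[i::k] for i in range(k)]  # simple round-robin split
    scores = []
    for i in range(k):
        held_out = folds[i]
        train = [y for j, fold in enumerate(folds) if j != i for y in fold]
        mean = sum(train) / len(train)  # "train" the stand-in model
        mse = sum((y - mean) ** 2 for y in held_out) / len(held_out)
        scores.append(mse)
    return sum(scores) / k  # averaged validation error

print(k_fold_cv([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], k=3))
```

Leave-one-out cross-validation is the special case k = n, where each fold holds exactly one sample.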
Generalization error:

$$R_{exp}(f) = E_P[L(Y, f(X))] = \int_{\mathcal{X} \times \mathcal{Y}} L(y, f(x)) \, P(x, y) \, dx \, dy$$
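The expectation above is taken over the joint distribution P(X, Y), which is unknown in practice; it is estimated by the empirical risk on a sample. A sketch with squared loss, assuming a hypothetical P(X, Y) where Y = X plus Gaussian noise with standard deviation 0.1 and the learned model f is the identity:

```python
import random

random.seed(0)

def f(x):
    return x  # the learned model (assumed here to be the identity)

def sample(n):
    # Draw n points from the hypothetical P(X, Y): Y = X + N(0, 0.1) noise.
    return [(x := random.uniform(0.0, 1.0), x + random.gauss(0.0, 0.1))
            for _ in range(n)]

def empirical_risk(model, data):
    # Sample average of the squared loss L(y, f(x)) = (y - f(x))²
    return sum((y - model(x)) ** 2 for x, y in data) / len(data)

# As n grows, the empirical risk approaches E_P[L(Y, f(X))] = 0.1² = 0.01.
print(empirical_risk(f, sample(10)))
print(empirical_risk(f, sample(100_000)))
```

With the small sample the estimate is noisy; with the large one it settles near the true expected loss, which is what the law of large numbers guarantees.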
The generalization ability of a learning method is often analyzed by studying an upper bound on its generalization error (the generalization error bound).
Supervised learning methods can be divided into generative approaches and discriminative approaches; the learned models are correspondingly called generative models and discriminative models.
The generative approach learns the joint probability distribution P(X, Y) from the data and then derives the conditional probability distribution P(Y|X) as the predictive model, i.e. the generative model:

$$P(Y|X) = \frac{P(X, Y)}{P(X)}$$

It is called a generative approach because the model represents the generative relationship by which a given input X produces the output Y. Examples: Naive Bayes and the Hidden Markov Model.
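The generative idea can be sketched on tiny categorical data (the weather/activity pairs below are a hypothetical example): estimate the joint distribution P(X, Y) from counts, then predict via P(Y|X) = P(X, Y) / P(X).

```python
from collections import Counter

# Hypothetical (x, y) observations.
data = [("sunny", "play"), ("sunny", "play"), ("rainy", "stay"),
        ("rainy", "stay"), ("sunny", "stay"), ("rainy", "play")]

joint = Counter(data)                 # count(x, y) estimates P(X, Y) up to 1/n
marginal = Counter(x for x, _ in data)  # count(x) estimates P(X) up to 1/n

def posterior(y, x):
    # P(Y|X) = P(X, Y) / P(X); the 1/n factors cancel in the ratio.
    return joint[(x, y)] / marginal[x]

print(posterior("play", "sunny"))  # 2/3
print(posterior("play", "rainy"))  # 1/3
```

This is the counting core of Naive Bayes for a single feature; the full method factorizes P(X, Y) further when X has many components.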
The discriminative approach learns the decision function f(X) or the conditional probability P(Y|X) directly from the data as the predictive model, i.e. the discriminative model. The discriminative approach is concerned with which output Y should be predicted for a given input X. Typical discriminative models include: k-nearest neighbors, the perceptron, decision trees, logistic regression, the maximum entropy model, SVM, boosting, and conditional random fields.
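One of the discriminative models listed above, the perceptron, can be sketched in a few lines: it learns a decision function f(x) = sign(w·x + b) directly, never modeling P(X, Y). The 2-D linearly separable points below are a hypothetical example.

```python
# Hypothetical linearly separable training data: (features, label in {+1, -1}).
data = [((2.0, 2.0), 1), ((3.0, 1.0), 1),
        ((-1.0, -1.5), -1), ((-2.0, -1.0), -1)]

w, b, eta = [0.0, 0.0], 0.0, 1.0      # weights, bias, learning rate
for _ in range(100):                  # epoch cap; converges much sooner here
    mistakes = 0
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified point
            w[0] += eta * y * x1      # stochastic gradient update on w
            w[1] += eta * y * x2
            b += eta * y              # and on the bias
            mistakes += 1
    if mistakes == 0:                 # a full pass with no errors: done
        break

print(w, b)
print(all(y * (w[0] * x1 + w[1] * x2 + b) > 0 for (x1, x2), y in data))  # True
```

Note that the learned (w, b) is used only to separate the classes; nothing about how the inputs themselves are distributed is estimated, which is the defining contrast with the generative approach.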
Python code for this column is shared and continuously updated; welcome to follow the Dream_angel_z blog.
Copyright notice: this is the blogger's original article; please indicate the source when reproducing it.
Machine Learning: An Introduction to Statistical Learning Methods