Here are some general basics, but it is still essential to practice machine learning hands-on. For applying machine learning to current projects such as recommender systems and DSPs, I think data processing is especially important, because in many cases machine learning algorithms presuppose, and depend on, well-prepared data.
Machine learning emphasizes three key words: algorithm, experience, and performance, as the following process illustrates. Data is fed through an algorithm to build a model, and the model is then evaluated. If its performance meets the requirements, the model is applied to other data; if not, the algorithm is adjusted, the model is rebuilt, and it is evaluated again. This cycle repeats until a satisfactory model is finally obtained to handle other data.
1.2 Classification of Machine Learning
1.2.1 Supervised Learning
Supervised learning infers a function (a model) from a given set of training data, and uses this function (model) to make predictions when new data arrives. The training set for supervised learning must include both inputs and outputs, also called features and targets, and the targets are labeled by humans. Under supervised learning, the input data is called "training data", and each training example carries a definite label or result, such as "spam" versus "non-spam" in an anti-spam system, or "1", "2", "3" in handwritten digit recognition. To build the predictive model, supervised learning sets up a learning process that compares the model's predictions against the actual results in the training data and keeps adjusting the predictive model until its predictions reach an expected accuracy. Common supervised learning tasks include regression analysis and statistical classification:
- Binary classification is a basic problem in machine learning: the test data is divided into two classes, for example judging whether an email is spam, or whether a mortgage should be granted.
- Multiclass classification is the logical extension of binary classification. For example, in classifying Internet content, web pages can be categorized as sports, news, technology, and so on.
Supervised learning is often used for classification, because the goal is usually to have the computer learn a classification system that we have created. Digit recognition, once again, is a common example of classification learning. In general, classification learning is suitable wherever a classification system is useful and the classes are easy to determine.
Supervised learning is the most common technique for training neural networks and decision trees. Both techniques depend heavily on the information given by a predetermined classification system. For a neural network, the classification system is used to judge the network's error, and the network is then adjusted to reduce it; for a decision tree, the classification system is used to determine which attributes provide the most information and should therefore drive the splits.
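To make this concrete, here is a minimal sketch of the supervised workflow just described, reusing the handwritten-digit example. The use of scikit-learn and of a decision tree is an illustrative assumption, not something the text prescribes:

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed):
# fit a model on labeled training data, then compare its predictions
# against held-out labels, mirroring the train/compare/adjust loop above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)           # labeled examples: digits "0".."9"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                   # learn from the labeled training data

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```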
1.2.2 Unsupervised Learning
Compared with supervised learning, the training set in unsupervised learning carries no human-labeled results. The data is not specifically labeled, and the learning model is meant to infer some of the data's intrinsic structure. Common application scenarios include association rule learning and clustering; common algorithms include the Apriori algorithm and the K-means algorithm. The goal of this type of learning is not to maximize a utility function but simply to find similarities in the training data. Clustering often discovers reasonably good groupings that match intuition: clustering on demographic data, for example, might reveal that the wealthy aggregate in one group and the poor in another.
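As a small illustration of clustering without labels, here is a K-means sketch on synthetic two-group data; scikit-learn and the made-up "demographic" blobs are assumptions for illustration:

```python
# Minimal unsupervised-learning sketch (assuming scikit-learn): K-means
# receives no labels at all and simply groups similar points together.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic "demographic" blobs, with no labels attached.
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # the two group centers K-means discovered
```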
Unsupervised learning looks much harder: the goal is to have the computer learn to do something without us telling it how. There are two broad approaches to unsupervised learning. The first is to teach the agent not by giving explicit classifications but by using some form of reward system to indicate success. Note that this type of training is usually framed as a decision problem, because the goal is not to produce a classification but to make the decisions that maximize reward. This approach generalizes nicely to the real world, where agents can be rewarded for some actions and punished for others.
Because unsupervised learning assumes no pre-classified samples, it can be very powerful in cases where our own classification scheme may not be the best choice. A prominent example is the game of backgammon: a series of computer programs (such as Neurogammon and TD-Gammon) learned the game by playing it over and over, eventually becoming stronger than the best human players. Some of the principles these programs discovered even surprised backgammon experts, and they performed better than backgammon programs trained on pre-classified samples.
1.2.3 Semi-Supervised Learning
Semi-supervised learning is a machine learning paradigm that sits between supervised and unsupervised learning, and it is a key problem in pattern recognition and machine learning. It mainly considers how to train and classify using a small number of labeled samples together with a large number of unlabeled samples, which has great practical significance for reducing labeling cost and improving learner performance. The main algorithm families include probability-based algorithms, modifications of existing supervised algorithms, and methods that rely directly on the clustering assumption, among others. In this learning mode, part of the input data is labeled and part is not; the model can be used for prediction, but it must first learn the internal structure of the data in order to organize it sensibly. Application scenarios include classification and regression, and the algorithms include extensions of common supervised algorithms that first try to model the unlabeled data and then predict the labeled data from it, such as graph inference or the Laplacian support vector machine (Laplacian SVM).
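As an illustrative sketch of this mode, the following example hides most of the labels and lets a graph-based method spread the few known ones through the data's structure; scikit-learn's LabelPropagation stands in here for the graph-inference family mentioned above, and the 90% hiding rate is an assumption for the example:

```python
# Minimal semi-supervised sketch (assuming scikit-learn): unlabeled
# samples are marked with -1, and label propagation infers their labels
# from the internal structure of the data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
y_partial = y.copy()
unlabeled = rng.random(len(y)) < 0.9     # hide ~90% of the labels
y_partial[unlabeled] = -1                # -1 means "unlabeled"

model = LabelPropagation().fit(X, y_partial)
print("accuracy on hidden labels:",
      (model.predict(X[unlabeled]) == y[unlabeled]).mean())
```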
Semi-supervised classification algorithms have a short history, and many aspects remain unexplored. Since its inception, semi-supervised learning has mainly been applied to synthetic data: most current semi-supervised methods assume noise-free sample data, whereas data in real life is mostly noisy, and pure samples are usually difficult to obtain.
1.2.4 Reinforcement Learning
Reinforcement learning learns how to act by performing actions and observing their results: every action has an impact on the environment, and the learning agent judges what to do from the feedback it observes in its surroundings. In this learning mode, the input data serves as feedback to the model. Unlike in supervised learning, where input data is used only to check whether the model is right or wrong, in reinforcement learning the input data feeds straight back into the model, and the model must adjust immediately. Common application scenarios include dynamic systems and robot control. Common algorithms include Q-learning and temporal difference learning.
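A toy sketch of Q-learning may help. Everything here (the 5-cell corridor environment, the reward scheme, the hyperparameters) is an illustrative assumption, written in plain Python:

```python
# Toy tabular Q-learning sketch: an agent on a 5-cell corridor gets
# reward 1 only at the right end, and must learn from that feedback
# alone which action to prefer in each cell.
import random

n_states, actions = 5, [-1, +1]           # move left / move right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for _ in range(500):                      # episodes
    s = 0
    while s != n_states - 1:
        a = (random.randrange(2) if random.random() < epsilon
             else max((0, 1), key=lambda i: Q[s][i]))
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: the model is adjusted immediately after feedback.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([max((0, 1), key=lambda i: Q[s][i]) for s in range(n_states - 1)])
# Expected output: [1, 1, 1, 1] -- always move right, toward the reward.
```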
In enterprise data applications, supervised and unsupervised learning models are the most commonly used. In fields such as image recognition, where there is a great deal of unlabeled data and only a small amount of labeled data, semi-supervised learning is currently a hot topic. Reinforcement learning is used more in robot control and other areas that require system control.
1.3 Common Algorithms for Machine Learning
Common machine learning algorithms are:
- Constructing conditional probabilities: regression analysis and statistical classification;
- Artificial neural networks;
- Decision trees;
- Gaussian process regression;
- Linear discriminant analysis;
- Nearest neighbor methods;
- Perceptrons;
- Radial basis function kernels;
- Support vector machines;
- Constructing probability density functions via generative models;
- The expectation-maximization algorithm;
- Graphical models, including Bayesian networks and Markov random fields;
- Generative topographic mapping;
- Approximate inference techniques;
- Markov chain Monte Carlo methods;
- Variational methods;
- Optimization: most of the methods above use optimization algorithms directly or indirectly.
We can group algorithms by similarity of function and form, for example tree-based algorithms, neural-network-based algorithms, and so on. Of course, the scope of machine learning is very large, and some algorithms are hard to place cleanly in any one category, while for some categories the same algorithm can be applied to several different types of problems. Below, the main machine learning algorithms are explained in a relatively accessible way:
1.3.1 Regression Algorithms
Regression algorithms are a class of algorithms that try to explore the relationships between variables by means of a measure of error. Regression is a powerful tool in statistical machine learning. In the machine learning field, when people speak of "regression" they sometimes mean a class of problems and sometimes a class of algorithms, which often confuses beginners. Common regression algorithms include: ordinary least squares, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), and locally estimated scatterplot smoothing (LOESS).
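A minimal sketch of the two most familiar members of this family follows, assuming scikit-learn; note that logistic regression, despite its name, is used for classification:

```python
# Ordinary least squares models a continuous target; logistic regression
# predicts a class. The toy data below is invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.arange(10).reshape(-1, 1).astype(float)
y_continuous = 3.0 * X.ravel() + 1.0          # y = 3x + 1, no noise
y_binary = (X.ravel() > 4).astype(int)        # class 0 vs. class 1

ols = LinearRegression().fit(X, y_continuous)
print(ols.coef_, ols.intercept_)              # ~[3.0] and ~1.0

clf = LogisticRegression().fit(X, y_binary)
print(clf.predict([[2.0], [8.0]]))            # expected: [0 1]
```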
1.3.2 Instance-Based Algorithms
Instance-based algorithms are often used to model decision problems. Such models typically keep a batch of sample data and then compare new data against those samples using some similarity measure, finding the best match in this way. For this reason, instance-based algorithms are also called "winner-take-all" learning or "memory-based" learning. Common algorithms include k-nearest neighbors (KNN), learning vector quantization (LVQ), and the self-organizing map (SOM).
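A brief KNN sketch (assuming scikit-learn) shows the "memory-based" flavor: the model simply stores the samples and matches new data against them:

```python
# k-nearest neighbors: classification by comparing a new point
# with the stored training samples.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # "memorize" the samples
print(knn.predict(X[:3]))                             # match new data to them
```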
1.3.3 Regularization Methods
Regularization methods are extensions of other algorithms (usually regression algorithms) that adjust the algorithm according to the complexity of the model. Regularization typically rewards simple models and penalizes complex ones. Common algorithms include: ridge regression, the least absolute shrinkage and selection operator (LASSO), and the elastic net.
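The following sketch (assuming scikit-learn, with synthetic data in which only the first feature matters) shows how the three penalties shrink coefficients, rewarding the simpler model:

```python
# Ridge, LASSO, and elastic net: regression with a complexity penalty.
# alpha controls how strongly large coefficients are penalized.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)   # only feature 0 matters

for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1)):
    model.fit(X, y)
    print(type(model).__name__, np.round(model.coef_, 2))
# LASSO and elastic net tend to drive the 9 irrelevant coefficients to exactly 0.
```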
1.3.4 Decision Tree Learning
Decision tree algorithms use a tree structure to build a decision model based on the attributes of the data; decision tree models are often used to solve classification and regression problems. Common algorithms include: classification and regression trees (CART), ID3 (Iterative Dichotomiser 3), C4.5, chi-squared automatic interaction detection (CHAID), decision stumps, random forests, multivariate adaptive regression splines (MARS), and the gradient boosting machine (GBM).
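A minimal decision-tree sketch (assuming scikit-learn) builds a tree from data attributes; printing it exposes the learned attribute tests:

```python
# A shallow decision tree: splits on data attributes to form a
# classification model, shown here as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))    # the learned attribute tests, printed as rules
```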
1.3.5 Bayesian Learning
Bayesian algorithms are a class of algorithms based on Bayes' theorem, mainly used to solve classification and regression problems. Common algorithms include: the naive Bayes algorithm, averaged one-dependence estimators (AODE), and Bayesian belief networks (BBN).
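A short naive Bayes sketch, assuming scikit-learn; GaussianNB applies Bayes' theorem under the simplifying assumption that features are independent given the class:

```python
# Naive Bayes: posterior class probabilities via Bayes' theorem.
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
nb = GaussianNB().fit(X, y)
print(nb.predict_proba(X[:1]))   # posterior class probabilities for one sample
```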
1.3.6 Kernel-Based Algorithms
The best known kernel-based algorithm is the support vector machine (SVM). Kernel-based algorithms map input data into a higher-dimensional vector space, in which some classification or regression problems can be solved more easily. Common kernel-based algorithms include: support vector machines (SVM), radial basis functions (RBF), and linear discriminant analysis (LDA).
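The following sketch (assuming scikit-learn, with a synthetic "ring versus core" dataset that no straight line can separate) shows the RBF kernel handling a problem that only becomes separable in the implicit higher-dimensional space:

```python
# SVM with an RBF kernel on data that is not linearly separable in 2-D.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)   # outer ring vs. inner core

svm = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", svm.score(X, y))
```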
1.3.7 Clustering Algorithms
Like "regression", "clustering" sometimes describes a class of problems and sometimes a class of algorithms. Clustering algorithms typically merge the input data around center points or in a hierarchical fashion. All clustering algorithms try to find the intrinsic structure of the data in order to group it along its most common dimensions. Common clustering algorithms include the K-means algorithm and the expectation-maximization (EM) algorithm.
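As an EM illustration, the following sketch (assuming scikit-learn; the two synthetic Gaussian groups are invented for the example) fits a Gaussian mixture without any labels and recovers the group centers:

```python
# Expectation-maximization via a Gaussian mixture model: the intrinsic
# two-group structure is found with no labels provided.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-3, 1, (150, 1)), rng.normal(3, 1, (150, 1))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print(gmm.means_.ravel())   # should recover centers near -3 and 3
```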
1.3.8 Association Rule Learning
Association rule learning extracts useful association rules from large multivariate datasets by finding the rules that best explain the relationships between data variables. Common algorithms include the Apriori algorithm and the Eclat algorithm.
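The sketch below illustrates only the core support-counting step behind Apriori, in plain Python; the toy transactions and threshold are invented for the example, and a real Apriori implementation would also prune candidate itemsets level by level:

```python
# Count item pairs across transactions and keep those whose support
# (frequency) clears a threshold -- the raw material for association rules.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
]
min_support = 0.5   # a pair must appear in at least half the transactions

pair_counts = Counter(
    pair for t in transactions for pair in combinations(sorted(t), 2)
)
frequent = {p: c / len(transactions) for p, c in pair_counts.items()
            if c / len(transactions) >= min_support}
print(frequent)   # e.g. {('beer', 'diapers'): 0.75, ...}
```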
1.3.9 Artificial Neural Network Algorithms
Artificial neural network algorithms are a class of pattern-matching algorithms that mimic biological neural networks, typically used to solve classification and regression problems. Artificial neural networks are a huge branch of machine learning with hundreds of different algorithms (deep learning is one of them, discussed separately below). Important artificial neural network algorithms include: the perceptron, back propagation, the Hopfield network, the self-organizing map (SOM), and learning vector quantization (LVQ).
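A perceptron, the oldest of these algorithms, fits in a few lines of plain NumPy; the AND task and the learning rate below are illustrative assumptions:

```python
# The classic perceptron: nudge the weights whenever a training point
# is misclassified, until the linear boundary is correct.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])                    # learn logical AND
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                            # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (target - pred) * xi         # error-driven weight update
        b += lr * (target - pred)

print([(1 if xi @ w + b > 0 else 0) for xi in X])   # expected: [0, 0, 0, 1]
```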
1.3.10 Deep Learning Algorithms
Deep learning algorithms are a development of artificial neural networks that has recently won a great deal of attention, especially since Baidu began investing heavily in deep learning, which also attracted much notice within China. With computing power becoming ever cheaper, deep learning attempts to build much larger and more complex neural networks. Many deep learning algorithms are semi-supervised, designed to handle large datasets in which only a small portion of the data is labeled. Common deep learning algorithms include: the restricted Boltzmann machine (RBM), deep belief networks (DBN), convolutional networks, and stacked auto-encoders.
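As a sketch only, here is a tiny convolutional network; TensorFlow/Keras and the random stand-in data are assumptions for illustration, since the text does not prescribe a framework:

```python
# A minimal convolutional network: cheap compute makes such larger,
# deeper models practical, as noted above.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data just to show the training call; real use would
# load an image dataset such as MNIST.
x = np.random.rand(32, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=32)
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)   # (1, 10) class probabilities
```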
1.3.11 Dimensionality Reduction Algorithms
Like clustering algorithms, dimensionality reduction algorithms try to analyze the internal structure of the data, but in an unsupervised way, attempting to summarize or explain the data using less information. Such algorithms can be used to visualize high-dimensional data or to simplify data for supervised learning. Common algorithms include: principal component analysis (PCA), partial least squares regression (PLS), Sammon mapping, multidimensional scaling (MDS), projection pursuit, and others.
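A brief PCA sketch, assuming scikit-learn: the 4-dimensional iris measurements are compressed to 2 components, e.g. for visualization, and the retained variance is reported:

```python
# PCA: summarize high-dimensional data with fewer components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)            # labels ignored: unsupervised
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print(X_2d.shape)                             # (150, 2)
print(pca.explained_variance_ratio_)          # variance kept per component
```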
1.3.12 Ensemble Algorithms
Ensemble algorithms train a number of relatively weak learning models independently on the same samples and then integrate their results to make an overall prediction. The main difficulty lies in deciding which independent weak models to combine and how to integrate their results. This is a very powerful, and also very popular, class of algorithms. Common algorithms include: boosting, bootstrapped aggregation (bagging), AdaBoost, stacked generalization (stacking/blending), the gradient boosting machine (GBM), and random forests.
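A short ensemble sketch (assuming scikit-learn) compares a bagging-style random forest with a boosting-style gradient machine, each combining many weak trees into one stronger predictor:

```python
# Two ensembles of weak trees, scored with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```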
Figure: Some common algorithms for machine learning.