Machine learning, pattern recognition, data mining, statistical learning, computer vision, speech recognition, and natural language processing all rely on a common core of algorithms.
1. Tree: A decision tree (Decision Tree) is a graphical method that uses probability analysis to evaluate the risk of a project and judge its feasibility: a tree of decisions is built to find the probability that the net present value is greater than or equal to zero. Because the branches of this decision diagram are drawn like the branches of a tree, it is called a decision tree. In machine learning, a decision tree is a predictive model that represents a mapping between object attributes and object values. Entropy measures the degree of disorder in a system; the tree-growing algorithms ID3, C4.5, and C5.0 all use entropy to choose splits, a measure based on the concept of entropy in information theory. A minimal code sketch follows the links below.
ID3 algorithm: Web link
Decision Tree: Web Links
Introduction and application of decision tree algorithm based on R language and SPSS: Web Links
Machine learning from getting started to abandoning the decision Tree algorithm: Web links
Algorithm grocery store--classification algorithms: decision trees (Decision Trees): Web Links
Programming Collective Intelligence--decision tree modeling (part 1): Web Links
Programming Collective Intelligence--decision tree modeling (part 2): Web Links
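To make the entropy-based splitting concrete, here is a minimal sketch using scikit-learn's DecisionTreeClassifier on the Iris dataset; criterion="entropy" corresponds to the information-gain measure used by ID3/C4.5. The library, dataset, and parameter values are illustrative assumptions, not something prescribed by the original article.

# A minimal decision-tree sketch (assumes scikit-learn is installed; Iris is only an example dataset).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# criterion="entropy" uses information gain, the same measure ID3/C4.5 build on.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))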
2. Regression: In most machine learning courses, regression is the first algorithm introduced, for two reasons. First, regression is simple, and it lets you move smoothly from statistics into machine learning. Second, regression is the cornerstone of several powerful algorithms that come later; without understanding regression, you cannot learn them. The regression family has two important sub-classes: linear regression and logistic regression. A small sketch of both follows the links below.
Seven regression analysis techniques you will not regret learning: Web Links
A discussion of Gaussian process regression: Web links
A simple guide to logistic regression in R: Web Links
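As a small illustration of the two sub-classes, the sketch below fits a linear regression to a noisy line and a logistic regression to a binary label. The synthetic data, the library (scikit-learn), and the parameters are assumptions made only for this example.

# Linear and logistic regression on synthetic data (assumes numpy and scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression: fit a continuous target y ~ 2x + 1 with noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.5, size=100)
lin = LinearRegression().fit(X, y)
print("slope ~2:", lin.coef_[0], "intercept ~1:", lin.intercept_)

# Logistic regression: classify whether x lies above the midpoint.
labels = (X[:, 0] > 5).astype(int)
logit = LogisticRegression().fit(X, labels)
print("P(class=1 | x=7):", logit.predict_proba([[7.0]])[0, 1])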
3. Bayes: Bayes' theorem is used in investment, decision-making, and analysis when information about a related event B is known but direct data about event A are lacking: from the states of B and their occurrence probabilities, the state and occurrence probability of A can be inferred. Bayes' formula (published in 1763), with a worked example after the links below:
P(H[i] | A) = P(H[i]) · P(A | H[i]) / ( P(H[1]) · P(A | H[1]) + P(H[2]) · P(A | H[2]) + ... + P(H[n]) · P(A | H[n]) )
Algorithm grocery store--classification algorithms: Bayesian networks (Bayesian Networks): Web link
Algorithm grocery store--classification algorithms: naive Bayesian classification (Naive Bayesian Classification): Web link
Naive Bayesian method: Web Links
Building multiple Bayesian models and implementing text categorization: Web links
Spam SMS identification with naive Bayesian classification: Web Links
R language and Data Analysis III: Classification algorithm 1: Web links
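The following worked example evaluates Bayes' formula above directly. The hypotheses ("spam" vs. "ham") and all probabilities are made-up numbers chosen only to illustrate the arithmetic.

# A worked example of Bayes' formula with made-up illustrative numbers.
# Hypotheses: H[1] = "message is spam", H[2] = "message is not spam" (ham).
# Evidence A: the message contains the word "prize".
prior = {"spam": 0.2, "ham": 0.8}          # P(H[i])
likelihood = {"spam": 0.5, "ham": 0.01}    # P(A | H[i])

# Denominator: total probability of seeing the evidence.
p_a = sum(prior[h] * likelihood[h] for h in prior)

# Posterior for each hypothesis, exactly the formula above.
posterior = {h: prior[h] * likelihood[h] / p_a for h in prior}
print(posterior)   # spam ≈ 0.926, ham ≈ 0.074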
4. SVM: The support vector machine is a classic algorithm born in the statistical learning community that has also become a staple of the machine learning community. A minimal sketch follows the link below.
Support Vector machines: Web Links
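The sketch below trains scikit-learn's SVC with an RBF kernel on a synthetic two-moons dataset; the dataset, kernel choice, and parameters are illustrative assumptions.

# A minimal support-vector-machine sketch (assumes scikit-learn is installed).
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An RBF kernel lets the SVM draw a non-linear decision boundary.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))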
5. Neural Networks: Neural network (also called artificial neural network, ANN) algorithms were very popular in the machine learning world in the 1980s but declined in the mid-1990s. Now, carried by the momentum of "deep learning", neural networks have returned as one of the most powerful machine learning algorithms, with RNNs, CNNs, and other architectures built on this foundation. A small sketch follows the links below.
A minimalist introduction to back-propagation neural networks: Web Links
Applications of recurrent neural networks (RNN) in semantic recognition: Web links
BP neural network model and Learning algorithm: Web link
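As a small, concrete example of a back-propagation network, the sketch below trains scikit-learn's MLPClassifier (a plain feed-forward network, not an RNN or CNN) on the digits dataset. The architecture and parameters are assumptions chosen for illustration.

# A small back-propagation network sketch using scikit-learn's MLPClassifier.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 64 units, trained by back-propagation (gradient descent).
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))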
6. Clustering and nearest neighbors: KNN, K-means, EM, etc. Minimal sketches of each follow the links below:
K Nearest Neighbor Method (KNN): Web link
A hands-on KNN implementation in Python: Web Links
Algorithm grocery store--K-means clustering: Web Links
EM algorithm: Web link
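The sketch below runs the three methods named above on one synthetic dataset: KNN (supervised nearest-neighbor classification), K-means (clustering), and a Gaussian mixture fitted by EM. The dataset and parameters are illustrative assumptions, and scikit-learn is assumed to be installed.

# Minimal sketches of KNN, K-means, and EM (Gaussian mixture).
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# KNN: supervised classification by majority vote among the k nearest points.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("KNN accuracy on training data:", knn.score(X, y))

# K-means: unsupervised clustering into 3 groups.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("K-means cluster sizes:", [(km.labels_ == c).sum() for c in range(3)])

# EM: a Gaussian mixture fitted with the expectation-maximization algorithm.
gm = GaussianMixture(n_components=3, random_state=0).fit(X)
print("EM mixture weights:", gm.weights_)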
7. Dimensionality reduction: In many pipelines a dimensionality-reduction algorithm such as PCA becomes part of data preprocessing; for some algorithms it is very difficult to get good results without this preprocessing step. A minimal PCA sketch follows the link below.
Four major machine learning dimensionality reduction algorithms: PCA, LDA, LLE, Laplacian eigenmaps Web Links
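A minimal PCA preprocessing sketch, assuming scikit-learn and using the 64-dimensional digits dataset only as an example input:

# A PCA preprocessing sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)   # 64-dimensional inputs
pca = PCA(n_components=10)            # keep the 10 strongest directions
X_reduced = pca.fit_transform(X)

print("original shape:", X.shape, "reduced shape:", X_reduced.shape)
print("variance kept:", pca.explained_variance_ratio_.sum())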
8. Association rules: Association rule mining is an important class of algorithms in data mining. In 1993, R. Agrawal and others first posed the problem of mining association rules between items in customer transaction data; the core is a level-wise recurrence based on the idea of two-phase frequent itemsets. In terms of classification, these are single-dimensional, single-level, Boolean association rules, and the typical algorithm is the Apriori algorithm. A small illustration follows the links below.
FP-Growth algorithm: Web link
The Apriori algorithm for association rules (market basket analysis): Web Links
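To illustrate the level-wise frequent-itemset idea behind Apriori, here is a tiny pure-Python sketch on a made-up set of shopping baskets. It is a simplified illustration of the pruning principle, not a full Apriori implementation.

# A tiny, pure-Python illustration of the Apriori level-wise idea:
# first find frequent single items, then count only pairs built from them.
from itertools import combinations

transactions = [
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "bread"},
    {"milk", "butter"},
]
min_support = 0.5  # an itemset must appear in at least half the baskets

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# Level 1: frequent single items.
items = {i for t in transactions for i in t}
freq1 = {frozenset([i]) for i in items if support({i}) >= min_support}

# Level 2: candidate pairs come only from frequent single items (the Apriori pruning step).
candidates = {a | b for a, b in combinations(freq1, 2)}
freq2 = {c for c in candidates if support(c) >= min_support}
print(sorted(map(sorted, freq2)))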
9. Recommendation: Recommendation algorithms can be broadly divided into three categories: content-based recommendation, collaborative filtering, and knowledge-based recommendation. A small collaborative-filtering sketch follows the links below.
Exploring the secrets of recommendation engines, part 1: Recommendation engines: Web Links
Exploring the secrets of recommendation engines, part 2: A deep dive into recommendation-engine algorithms: collaborative filtering: Web Links
Exploring the secrets of recommendation engines, part 3: A deep dive into recommendation-engine algorithms: clustering: Web Links
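As a concrete taste of collaborative filtering, the sketch below does item-based filtering with cosine similarity on a small made-up ratings matrix; the data and the prediction rule are assumptions chosen for illustration.

# A minimal item-based collaborative-filtering sketch with made-up ratings (assumes numpy).
import numpy as np

# Rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

# Predict user 0's score for item 2 as a similarity-weighted average of their known ratings.
user, target = 0, 2
rated = ratings[user] > 0
pred = ratings[user, rated] @ item_sim[target, rated] / item_sim[target, rated].sum()
print("predicted rating:", round(pred, 2))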
Other
Community detection--label propagation: Web Links
Perceptron: Web Links
Understanding the HMM (hidden Markov model) in one article: Web Links