After looking at the kinds of problems machine learning can solve, we can consider the kinds of data we collect and the algorithms we might try. In this post, we will introduce the most popular machine learning algorithms; surveying the main ones gives a general sense of the methods available.
There are a great many algorithms, and matters are complicated by the many variations and extensions of each method, which makes it hard to say exactly what counts as a canonical algorithm. In this post, I hope to give you two ways to think about and categorize the algorithms you will encounter in this field.
The first approach groups algorithms by how they learn; the second groups them by similarity of form and function (much as we group similar animals together). Both perspectives are useful.
Learning Methods
An algorithm can model a problem in different ways depending on how it interacts with experience, its environment, or whatever we want to call the input data. In machine learning and artificial intelligence textbooks, the common practice is to consider an algorithm's learning style first.
There are only a few main learning styles an algorithm can adopt. We will walk through them one by one, giving example algorithms and the classes of problems each style is suited to.
- Supervised Learning: The input data is called training data and has known labels or results, such as spam/not-spam or a stock price at a point in time. A model is prepared through a training process: it makes predictions on the training data and is corrected when those predictions are wrong, until it reaches a desired level of accuracy.
- Unsupervised Learning: The input data is not labeled and has no known result. A model is prepared by deducing structure present in the input data. Example problems include association rule learning and clustering. Example algorithms include the Apriori algorithm and K-means.
- Semi-Supervised Learning: The input data is a mixture of labeled and unlabeled examples. There is a desired prediction problem, but the model must also discover the structure that organizes the data while making predictions. Example problems include classification and regression. Example algorithms are extensions of other flexible methods that make assumptions about how to model the unlabeled data.
- Reinforcement Learning: The input data is provided as stimulus from an environment to which the model must respond. Feedback is not given as part of a training process as in supervised learning, but arrives as rewards or punishments from the environment. Example problems include systems and robot control. Example algorithms include Q-learning and temporal difference learning.
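The supervised style can be sketched in a few lines: a toy model with a single parameter repeatedly predicts, compares against the known label, and corrects itself. The data and learning rate here are purely illustrative.

```python
# Minimal illustration of the supervised loop: predict, compare to the
# known label, then correct the model. Fits y = w*x to toy data.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, known label)

w = 0.0            # the single model parameter
lr = 0.05          # learning rate
for _ in range(200):
    for x, y in data:
        pred = w * x          # the model makes a prediction
        error = pred - y      # compare with the known answer
        w -= lr * error * x   # correct the parameter

print(round(w, 2))  # close to 2.0, the slope underlying the toy data
```

The same predict-compare-correct loop, at much larger scale, is what happens inside most supervised learning libraries.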
When you crunch data to model business decisions, you will most typically use supervised and unsupervised learning. A hot topic at the moment is semi-supervised learning, applied, for example, to image classification, where the datasets are large but contain only a few labeled examples.
Algorithm Similarity
Algorithms are often grouped by similarity of form or function, for example tree-based methods and neural-network-inspired methods. This is a useful way to classify them, but it is not perfect. Some algorithms fit naturally into several categories, like learning vector quantization, which is both a neural-network-inspired method and an instance-based method. There are also names that describe both a class of problems and a class of algorithms, such as regression and clustering. Because of this, you will see algorithms grouped differently in different sources. Like machine learning models themselves, there is no perfect classification, only one that is good enough.
In this section, I list many popular machine learning algorithms grouped in the way I find most intuitive. Neither the categories nor the algorithms are exhaustive, but I think they are representative and will give you a general feel for the entire field. If you find that an algorithm or group of algorithms is missing, leave a comment and share it with everyone. Let's get started.
Regression Analysis
Regression is concerned with modeling the relationship between variables, iteratively refined using a measure of the error in the model's predictions. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. The term can be confusing because regression names both a class of problems and a class of algorithms; really, regression is a process. Here are some examples:
- Ordinary Least Squares Regression (OLSR)
- Logistic Regression
- Stepwise Regression
- Multivariate Adaptive Regression Splines (MARS)
- Locally Estimated Scatterplot Smoothing (LOESS)
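As a concrete example of the regression process, here is ordinary least squares for a single predictor, using the closed-form slope and intercept. The data points are made up for illustration.

```python
# Ordinary least squares for one predictor: the closed-form solution
# minimizes the sum of squared prediction errors (toy data).
xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))
```

More elaborate regression methods (stepwise, MARS, LOESS) replace this single global line with richer fitted relationships, but the error-minimizing idea is the same.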
Instance-Based Method
Instance-based learning models decision problems using instances of the training data that are deemed important or representative. Such methods typically build a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. The focus is on the representation of the stored instances and the similarity measures used between instances.
- K-Nearest Neighbor Algorithm (KNN)
- Learning vector quantization (LVQ)
- Self-Organizing Map (SOM)
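A minimal sketch of the instance-based idea, using a tiny k-nearest-neighbour classifier; the training points and the choice of k are arbitrary.

```python
from collections import Counter

# k-nearest neighbours: store the training instances, then label a new
# point by majority vote among its k closest stored instances.
train = [((1.0, 1.0), 'a'), ((1.2, 0.8), 'a'),
         ((5.0, 5.0), 'b'), ((5.2, 4.9), 'b')]

def knn_predict(point, k=3):
    # sort stored instances by squared Euclidean distance to the query
    nearest = sorted(train, key=lambda item:
                     sum((p - q) ** 2 for p, q in zip(point, item[0])))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((1.1, 0.9)))  # 'a'
```

Note that no "training" happens at all: the whole model is the stored data plus the similarity measure, which is exactly what characterizes this family.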
Regularization Method
These are extensions of other methods (typically regression methods) that penalize models for their complexity, favoring simpler models that generalize better. I list regularization methods separately here because they are popular and powerful, and are generally simple modifications of other methods.
- Ridge Regression
- Least Absolute Shrinkage and Selection Operator (LASSO)
- Elastic Net
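The shrinkage effect of regularization is easiest to see in one dimension: ridge regression's L2 penalty pulls the least-squares slope toward zero. The toy data and penalty values below are illustrative (no intercept, single feature).

```python
# Ridge regression in one dimension (no intercept): the penalty alpha
# shrinks the fitted slope toward zero relative to plain least squares.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

def ridge_slope(alpha):
    # minimizes sum((y - w*x)^2) + alpha * w^2, which has the closed form:
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)

print(ridge_slope(0.0))   # 2.0 -- ordinary least squares
print(ridge_slope(14.0))  # 1.0 -- heavily penalized, slope halved
```

LASSO uses an L1 penalty instead, which can shrink coefficients exactly to zero, and the Elastic Net combines both penalties.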
Decision Tree Learning
Decision tree methods build a model of decisions based on the actual values of attributes in the data. Decisions fork in a tree structure until a prediction can be made for a given record. Decision trees are trained on data for classification and regression problems.
- Classification and Regression Tree (CART)
- Iterative Dichotomiser 3 (ID3)
- C4.5
- Chi-squared Automatic Interaction Detection (CHAID)
- Decision Stump
- Random Forest
- Multivariate Adaptive Regression Splines (MARS)
- Gradient Boosting Machine (GBM)
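A decision stump, the simplest tree in the list above, can be learned by exhaustively trying thresholds and keeping the split with the fewest mistakes. The one-feature dataset here is hypothetical.

```python
# Learn a decision stump (a single-split tree): try each observed value
# as a threshold and keep the split with the fewest misclassifications.
samples = [(1.0, 'no'), (2.0, 'no'), (3.0, 'yes'), (4.0, 'yes')]

best = None  # (threshold, error count)
for threshold in [s[0] for s in samples]:
    # the stump predicts 'yes' whenever the feature exceeds the threshold
    errors = sum((x > threshold) != (label == 'yes') for x, label in samples)
    if best is None or errors < best[1]:
        best = (threshold, errors)

print(best)  # (2.0, 0): splitting at 2.0 classifies every sample correctly
```

Full tree algorithms like CART and C4.5 apply this kind of split search recursively, choosing splits by purity measures such as Gini impurity or information gain rather than raw error counts.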
Bayesian Algorithm
Bayesian methods are those that explicitly apply Bayes' theorem to classification and regression problems.
- Naive Bayes Algorithm
- Averaged One-Dependence Estimators (AODE)
- Bayesian Belief Network (BBN)
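A toy multinomial naive Bayes classifier shows how Bayes' theorem is applied: each class score is the log prior plus the log likelihoods of the observed words, with add-one smoothing. The tiny spam/ham corpus is invented for illustration.

```python
from math import log

# Tiny multinomial naive Bayes spam filter on an invented corpus.
spam = ['win money now', 'win prize']
ham = ['meeting at noon', 'see you at lunch']

def word_counts(docs):
    counts = {}
    for doc in docs:
        for w in doc.split():
            counts[w] = counts.get(w, 0) + 1
    return counts

vocab = set(word_counts(spam)) | set(word_counts(ham))

def score(text, docs, prior):
    counts = word_counts(docs)
    total = sum(counts.values())
    s = log(prior)  # log prior probability of the class
    for w in text.split():
        # add-one (Laplace) smoothing so unseen words don't zero the score
        s += log((counts.get(w, 0) + 1) / (total + len(vocab)))
    return s

msg = 'win money'
label = 'spam' if score(msg, spam, 0.5) > score(msg, ham, 0.5) else 'ham'
print(label)  # spam
```

The "naive" part is the assumption that words are independent given the class; AODE and Bayesian belief networks relax that independence assumption in different ways.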
Kernel Methods
The best known of the kernel methods is the popular support vector machine, which is really a family of methods in itself. Kernel methods are concerned with mapping input data into a higher-dimensional vector space, where some classification or regression problems are easier to solve.
- Support Vector Machine (SVM)
- Radial Basis Function (RBF)
- Linear Discriminant Analysis (LDA)
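A sketch of the intuition behind kernel methods: a 1-D problem that no single threshold can separate becomes linearly separable after mapping into a higher-dimensional space. The mapping phi(x) = (x, x**2) and the sample points are chosen for illustration.

```python
# Points labelled by whether they lie "inside" or "outside" an interval
# cannot be split by one threshold on x alone...
points = [(-2, 'out'), (-1, 'in'), (1, 'in'), (2, 'out')]

def phi(x):
    # ...but after mapping to 2-D with an extra squared coordinate,
    # a single horizontal line (second coordinate = 2.5) separates them.
    return (x, x * x)

for x, label in points:
    _, x_sq = phi(x)
    predicted = 'out' if x_sq > 2.5 else 'in'
    assert predicted == label
print('separable after mapping')
```

Real kernel methods never compute the mapping explicitly; the "kernel trick" evaluates inner products in the mapped space directly, which is what makes high-dimensional (even infinite-dimensional) spaces tractable.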
Clustering Method
Like regression, clustering describes both a class of problems and a class of methods. Clustering methods are typically organized by their modeling approach, such as centroid-based or hierarchical. All methods use the inherent structure of the data to organize it into groups of greatest commonality.
- K-means Method
- Expectation Maximization (EM)
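K-means itself fits in a few lines: alternate between assigning each point to its nearest centre and moving each centre to the mean of its cluster. The 1-D data, k = 2, and starting centres below are illustrative.

```python
# K-means on 1-D toy data with k = 2.
data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centers = [0.0, 10.0]  # deliberately bad initial guesses

for _ in range(10):
    # assignment step: each point joins its nearest centre
    clusters = {0: [], 1: []}
    for x in data:
        idx = min((abs(x - c), i) for i, c in enumerate(centers))[1]
        clusters[idx].append(x)
    # update step: each centre moves to the mean of its cluster
    centers = [sum(pts) / len(pts) for pts in (clusters[0], clusters[1])]

print([round(c, 1) for c in centers])  # [1.0, 8.0]
```

EM generalizes this alternation: the assignment step becomes a soft, probabilistic responsibility calculation, and the update step re-estimates distribution parameters instead of plain means.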
Association rule learning
Association rule learning methods extract the rules that best explain the observed relationships between variables in the data. These rules can discover important and commercially useful associations in large multidimensional datasets.
- Apriori algorithm
- Eclat Algorithm
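The counting step at the heart of Apriori can be sketched by brute force: count the support of every item pair across transactions and keep those above a threshold. The basket data is made up for illustration.

```python
from itertools import combinations

# Support counting, the first step of Apriori: how many transactions
# contain each pair of items?
transactions = [
    {'bread', 'milk'},
    {'bread', 'butter'},
    {'bread', 'milk', 'butter'},
    {'milk', 'butter'},
]
min_support = 2  # keep pairs appearing in at least 2 baskets

items = sorted(set().union(*transactions))
frequent_pairs = {}
for pair in combinations(items, 2):
    support = sum(set(pair) <= t for t in transactions)  # subset test
    if support >= min_support:
        frequent_pairs[pair] = support

print(frequent_pairs)
```

The real Apriori algorithm's contribution is pruning: it only considers an itemset if all of its subsets were already frequent, avoiding this exhaustive enumeration on larger data.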
Artificial Neural Network
Artificial neural networks are models inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods commonly used for regression and classification problems, but this enormous subfield really comprises hundreds of algorithms and variations that can solve problems of all types. Some popular methods include:
- Perceptron
- Back-Propagation
- Convolutional Neural Network (CNN)
- Self-Organizing Map (SOM)
- Learning Vector Quantization (LVQ)
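The perceptron, the simplest entry above, can be written directly: weights are nudged whenever the prediction disagrees with the target. Here it learns the AND function; the learning rate and epoch count are arbitrary.

```python
# A perceptron learning AND with the classic error-driven update rule.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias
lr = 0.1        # learning rate
for _ in range(20):
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out          # zero when the prediction is right
        w[0] += lr * err * x1       # nudge weights toward the target
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in samples]
print(preds)  # [0, 0, 0, 1]
```

Back-propagation extends this correction idea through multiple layers of such units by propagating the error gradient backwards.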
Deep Learning
Deep learning methods are a modern update to artificial neural networks that exploit abundant, cheap computation. They build much larger and more complex neural networks and, as noted above, many of these methods address semi-supervised learning problems, where large datasets contain very little labeled data.
- Restricted Boltzmann Machine (RBM)
- Deep Belief Network (DBN)
- Convolutional Neural Network
- Stacked Auto-Encoders (SAE)
Dimensionality Reduction Method
Like clustering methods, dimensionality reduction seeks to exploit the inherent structure of the data, but in this case in an unsupervised manner, in order to summarize or describe the data using less information. This can be useful for visualizing high-dimensional data or for simplifying data before subsequent supervised learning.
- Principal Component Analysis (PCA)
- Partial Least Squares Regression (PLS)
- Sammon Mapping
- Multidimensional Scaling Analysis (MDS)
- Projection Pursuit
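PCA can be sketched without any linear-algebra library by running power iteration on the 2x2 covariance matrix to find the direction of greatest variance. The small 2-D dataset here is illustrative.

```python
# Principal component analysis via power iteration: find the single
# direction (first principal component) of greatest variance.
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2), (3.1, 3.0),
        (2.3, 2.7), (2.0, 1.6), (1.0, 1.1), (1.5, 1.6), (1.1, 0.9)]

n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centered = [(x - mx, y - my) for x, y in data]

# entries of the 2x2 covariance matrix
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

# power iteration converges to the leading eigenvector (the first PC)
v = (1.0, 0.0)
for _ in range(50):
    v = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (v[0] ** 2 + v[1] ** 2) ** 0.5
    v = (v[0] / norm, v[1] / norm)

print(round(v[0], 2), round(v[1], 2))  # roughly (0.68, 0.74)
```

Projecting each centered point onto this direction compresses the 2-D data to 1-D while preserving as much variance as possible; real PCA keeps the top few such directions.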
Ensemble Methods
Ensemble methods are composed of multiple weaker models that are trained independently and whose predictions are combined in some way to make an overall prediction. Much effort goes into choosing what types of weak learners to combine and how to combine them. This is a very powerful class of techniques and, as such, is very popular.
- Boosting
- Bootstrapped Aggregation (Bagging)
- Adaptive Boosting (AdaBoost)
- Stacked Generalization (Blending)
- Gradient Boosting Machine (GBM)
- Random Forest
[Figure: an example of ensemble curve fitting — the weak members are shown as gray lines and the combined ensemble prediction in red, on temperature/ozone data fitted with LOESS models.]
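The bagging idea above can be sketched with the simplest possible "model", the sample mean: each bootstrap resample yields a slightly different prediction, and averaging across them gives a steadier combined prediction. The data and resample count are illustrative.

```python
import random

# Bagging in miniature: train many "models" (here, just the mean) on
# bootstrap resamples and average their predictions.
random.seed(0)
data = [2.0, 4.0, 6.0, 8.0, 10.0]

predictions = []
for _ in range(200):
    resample = [random.choice(data) for _ in data]      # bootstrap sample
    predictions.append(sum(resample) / len(resample))   # weak model's output

bagged = sum(predictions) / len(predictions)
print(round(bagged, 1))  # close to the true mean of 6.0
```

Random forests follow exactly this recipe with decision trees as the weak models, adding random feature selection at each split to make the trees less correlated.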
A Tour of Machine Learning Algorithms