An important classifier in machine learning and pattern recognition is the Support Vector Machine (SVM), which is widely used in many fields. However, its computational complexity and training speed are the main obstacles to its use in real-time applications, and many algorithms, such as SMO and kernel methods, have been proposed to address this.
The regularized least-squares classification (RLSC) introduced here is a classifier with comparable performance to SVM, but its computation is relatively simple: an RLSC problem can be solved by solving a single system of linear equations. We introduce it below.
The objective function:

RLSC follows the structural risk minimization principle. Given training data $(x_i, y_i)$, $i = 1, \dots, n$, with labels $y_i \in \{-1, +1\}$, it minimizes (the original formula image was lost; this is the standard RLSC objective)

$$\min_{f} \; \frac{1}{2}\|f\|_K^2 + \frac{C}{2}\sum_{i=1}^{n}\big(y_i - f(x_i)\big)^2,$$

where $\|f\|_K$ is the norm in the reproducing-kernel Hilbert space induced by a kernel $K$, and $C$ controls the trade-off between data fit and regularization.
By the kernel method (the representer theorem), the minimizer simplifies to the finite expansion $f^*(x)$:

$$f^*(x) = \sum_{i=1}^{n} c_i\, K(x_i, x).$$
This is the function we ultimately need. Any common kernel can be used for $K$, for example the Gaussian kernel $K(x, x') = \exp\!\big(-\|x - x'\|^2 / (2\sigma^2)\big)$. So how do we solve for the coefficients $c_i$?
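As a quick concrete illustration of the Gaussian kernel mentioned above, here is a small NumPy sketch; the three data points are made up for the example:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Kernel matrix with entries K[i, j] = exp(-||x_i - x'_j||^2 / (2 sigma^2))."""
    sq_dists = (
        np.sum(X1 ** 2, axis=1)[:, None]
        + np.sum(X2 ** 2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Three example points in the plane (chosen arbitrarily for illustration).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = gaussian_kernel(X, X)
print(K)  # symmetric, with 1.0 on the diagonal
```

Note that the kernel matrix is symmetric and positive semidefinite, with every diagonal entry equal to 1 for the Gaussian kernel; this is what makes the linear system derived below well behaved.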
How to solve c:

We all know that in SVM the hinge loss function is used. Here, by contrast, the loss is the squared loss:

$$V\big(y_i, f(x_i)\big) = \big(y_i - f(x_i)\big)^2.$$
Substituting $f^*(x) = \sum_i c_i K(x_i, x)$ back into the original objective and writing it in matrix form, with $K$ now denoting the $n \times n$ kernel matrix $K_{ij} = K(x_i, x_j)$, $y = (y_1, \dots, y_n)^T$, and $c = (c_1, \dots, c_n)^T$, gives

$$\min_{c} \; \frac{C}{2}\,(y - Kc)^T(y - Kc) + \frac{1}{2}\, c^T K c.$$
So we can solve this by taking the derivative with respect to $c$ and setting it equal to zero:

$$-C\,K(y - Kc) + Kc = 0 \;\Longrightarrow\; \Big(K + \tfrac{1}{C} I\Big)c = y,$$

which is a single system of linear equations with solution $c = \big(K + \tfrac{1}{C} I\big)^{-1} y$.
Isn't that easy to solve?